Linked by Thom Holwerda on Mon 14th Mar 2011 18:59 UTC
Talk, Rumors, X Versus Y And over the weekend, the saga regarding Canonical, GNOME, and KDE has continued. Lots of comments all over the web, some heated, some well-argued, some wholly indifferent. Most interestingly, Jeff Waugh and Dave Neary have elaborated on GNOME's position after the initial blog posts by Shuttleworth and Seigo, providing a more coherent look at GNOME's side of the story.
in one comment
by robmv on Mon 14th Mar 2011 19:26 UTC
robmv
Member since:
2006-08-12

This comment on Aaron's blog, quoted by Dave Neary, tells a lot:

CSD is really not a good example of how stuff development between Canonical and GNOME should work. I’m the person at Canonical who started CSD, but never finished it.

It started as just an experimental hack, and somehow got picked up as a “Canonical project”. Once that happened my immediate manager told me to stop committing code to GNOME git and do any further work on it privately in bzr.

For me this made developing it further much more difficult, because it was an extremely large and intrusive change into GTK+ source code and my manager didn’t want upstream developers to help me with at least peer code review.

Reply Score: 5

RE: in one comment
by acobar on Mon 14th Mar 2011 19:41 UTC in reply to "in one comment"
acobar Member since:
2005-11-15

It still does not fix the problems. Fact is, the GNOME people knew that an effort was underway to provide a common notification area.

Now, instead of restarting the process and working to provide even a temporary solution for this mess, some guys are just hunting witches.

Reply Score: 4

RE[2]: in one comment
by robmv on Mon 14th Mar 2011 19:52 UTC in reply to "RE: in one comment"
robmv Member since:
2006-08-12

Yes, a witch hunt is bad, but what could GNOME developers do if the effort was conducted privately? Become Canonical employees to gain access to those discussions? Or wait until the code dump was thrown in their faces, and accept it? If GNOME developers decided to implement something resembling what Canonical was doing behind closed doors, I do not blame them.

Edited 2011-03-14 19:58 UTC

Reply Score: 5

RE[3]: in one comment
by acobar on Mon 14th Mar 2011 20:07 UTC in reply to "RE[2]: in one comment"
acobar Member since:
2005-11-15

They still knew that the KDE guys were looking for a common solution.

I don't want to repeat everything that was said by the different sides. If there is more than one human in a room, it is quite obvious that unpleasantness will show up.

Fact is, FOSS desktops hold only a tiny share in a world full of hostile competitors. If the FOSS community wants to be relevant on the desktop, it needs to sort out its differences and cooperate.

Reply Score: 12

RE[3]: in one comment
by _txf_ on Mon 14th Mar 2011 20:10 UTC in reply to "RE[2]: in one comment"
_txf_ Member since:
2008-03-17

Yes, a witch hunt is bad, but what could GNOME developers do if the effort was conducted privately?


The spec was open, and people on the KDE side made sure it was known. GNOME did not have to accept Canonical's work; it could have implemented its own version following the spec if it had problems with Canonical's implementation. The thing here is that the new GNOME systray was developed later, and some people had full knowledge that there were working implementations by KDE and Canonical.
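For context on how small the shared surface actually is: the spec in question is the StatusNotifierItem D-Bus protocol, where an app registers itself with a watcher service and any tray implementing the watcher side picks it up. A minimal sketch, using the bus name, object path, and method from the published KDE draft; the item name `org.example.TrayApp` is a made-up placeholder, and this builds the call's addressing as plain data rather than talking to a live bus:

```python
# Addressing for the StatusNotifierItem registration call, per the KDE
# draft spec. No live D-Bus connection is made in this sketch.
WATCHER_BUS_NAME = "org.kde.StatusNotifierWatcher"
WATCHER_PATH = "/StatusNotifierWatcher"
WATCHER_IFACE = "org.kde.StatusNotifierWatcher"

def register_item_call(item_service: str):
    """Return (bus_name, object_path, interface, method, args) for the
    method call an app sends to register its StatusNotifierItem."""
    return (WATCHER_BUS_NAME, WATCHER_PATH, WATCHER_IFACE,
            "RegisterStatusNotifierItem", [item_service])

call = register_item_call("org.example.TrayApp")
print(call[3])  # RegisterStatusNotifierItem
```

Any toolkit that emits this one call, through whatever D-Bus binding it prefers, appears in every tray that implements the watcher side; that is the point being made above: GNOME could have implemented the spec independently of Canonical's code.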

Reply Score: 8

RE[3]: in one comment
by allanregistos on Thu 17th Mar 2011 03:25 UTC in reply to "RE[2]: in one comment"
allanregistos Member since:
2011-02-10

Yes, a witch hunt is bad, but what could GNOME developers do if the effort was conducted privately? Become Canonical employees to gain access to those discussions? Or wait until the code dump was thrown in their faces, and accept it? If GNOME developers decided to implement something resembling what Canonical was doing behind closed doors, I do not blame them.

robmv, that is one of the *reasons* GNOME won't admit any wrongdoing for *NOT* collaborating, but it is certainly not a *valid* reason for not working together. Regardless of whether those efforts were *DONE* in private, collaboration should be a high priority in order to promote the use of Linux on the desktop, and collaboration between DEs is a must to make things easy for application developers. Note: ISVs (gaming, productivity suites, etc.) are the most critical component of the Linux desktop, and that compromise should *have* been made. Look, just run KDE apps on GNOME: they don't look native. That is an example of not collaborating, and it has nothing to do with the *private* argument.

Reply Score: 1

RE: in one comment
by Richard Dale on Mon 14th Mar 2011 20:34 UTC in reply to "in one comment"
Richard Dale Member since:
2005-07-22

This comment on Aaron's blog, quoted by Dave Neary, tells a lot:

"CSD is really not a good example of how stuff development between Canonical and GNOME should work. I’m the person at Canonical who started CSD, but never finished it.

It started as just an experimental hack, and somehow got picked up as a “Canonical project”. Once that happened my immediate manager told me to stop committing code to GNOME git and do any further work on it privately in bzr.

For me this made developing it further much more difficult, because it was an extremely large and intrusive change into GTK+ source code and my manager didn’t want upstream developers to help me with at least peer code review.
"

It doesn't tell us a lot, because CSD, or 'Client Side Decorations', has nothing to do with app indicators. It was just an experiment and, as far as I can see, a long way from ever being included in either Gnome or KDE. Whether such an experiment is committed to the Gnome repo or a Canonical one doesn't actually matter much in practice.

Reply Score: 5

RE[2]: in one comment
by robmv on Tue 15th Mar 2011 16:01 UTC in reply to "RE: in one comment"
robmv Member since:
2006-08-12

It gives one example of how Canonical prefers not to develop with the community: in this case they forced the developer to work privately.

Reply Score: 1

F**k this shit!
by korpenkraxar on Mon 14th Mar 2011 19:57 UTC
korpenkraxar
Member since:
2005-09-10

We the users will suffer from desktop environment quirks while the egos battle it out, watching Microsoft, Google and Apple laugh all the way to the bank.

Reply Score: 14

RE: F**k this shit!
by segedunum on Mon 14th Mar 2011 20:41 UTC in reply to "F**k this shit!"
segedunum Member since:
2005-07-06

Yep, well, go tell that to Dave Neary. He now apparently doesn't think D-Bus is important, after years of work, and doesn't think that applications should be able to communicate with one another.

Reply Score: 11

RE[2]: F**k this shit!
by dneary on Tue 15th Mar 2011 10:24 UTC in reply to "RE: F**k this shit!"
RE[3]: F**k this shit!
by Rehdon on Tue 15th Mar 2011 13:20 UTC in reply to "RE[2]: F**k this shit!"
Rehdon Member since:
2005-07-06

What have you added to the discussion? Your sarcastic comment didn't help me in the least to understand what's going on.

Rehdon

Reply Score: 4

RE: F**k this shit!
by darknexus on Mon 14th Mar 2011 22:34 UTC in reply to "F**k this shit!"
darknexus Member since:
2008-07-15

We the users will suffer from desktop environment quirks while the egos battle it out, watching Microsoft, Google and Apple laugh all the way to the bank.


And this is different from how things are now in what way? They've already got enough reasons to laugh at the major F/OSS operating systems without even looking at the GUI side of things. Let's try Linux: kernel modules, no stable API or ABI, subsystems being redone and overlaid atop one another... should I go on?

Reply Score: 3

RE[2]: F**k this shit!
by korpenkraxar on Mon 14th Mar 2011 23:03 UTC in reply to "RE: F**k this shit!"
korpenkraxar Member since:
2005-09-10

I fail to see how that comment applies to what I wrote. I would say the main reason Linux as an OS has a chance in the corporate world is the open and fast-moving development of a technically great kernel.

Reply Score: 8

RE[3]: F**k this shit!
by nt_jerkface on Tue 15th Mar 2011 01:36 UTC in reply to "RE[2]: F**k this shit!"
nt_jerkface Member since:
2009-08-26

This is more of the same deluded thinking that needs to end.

The Linux kernel is not magic. Both Windows and OSX can run corporate software without any problems. There is no kernel bottleneck with Windows or OSX.

The open process has not given the Linux kernel supernatural abilities. This type of thinking reminds me of WWII history, where Germans believed their special Teutonic spirit would defeat the Allies even when the odds were clearly against them.

Don't bet on mysticism.

Reply Score: 3

RE[4]: F**k this shit!
by oiaohm on Tue 15th Mar 2011 01:54 UTC in reply to "RE[3]: F**k this shit!"
oiaohm Member since:
2009-05-30

This is more of the same deluded thinking that needs to end.

The Linux kernel is not magic. Both Windows and OSX can run corporate software without any problems. There is no kernel bottleneck with Windows or OSX.

Incorrect, as usual, nt_jerkface. Please go and look at how you do fully functional real-time support on Windows. There are some major bottlenecks inside Windows, just like in any other OS. In fact, one of the biggest disruptions to real-time on x86 architectures can come from the motherboard: SMM (http://en.wikipedia.org/wiki/System_Management_Mode). Thank you, Intel, for adding something that ruins the platform.

Also, a lot of crashes on Windows can be traced to kernel-mode driver conflicts. Yes, there is no magic.

There is also commercial software that installs on many different Linux distributions without issue.

The open process has not given the Linux kernel supernatural abilities. This type of thinking reminds me of WWII history where Germans believed their special Teutonic spirit would defeat the allies even when the odds were clearly against them.


Really, the open process does not give the kernel supernatural abilities, but it does prevent black box connecting to black box, which equals untraceable failure.

There are cases of Windows 7 SP1 destroying stored data on hard drives that track directly to black box + black box = oh-my-god data loss. The IDE controller driver worked perfectly with Windows 7, but SP1 changed some of the interface locking methods, leading to data loss, since the locking in the OS is no longer what the driver expects. Yes, you can break drivers twelve ways from Sunday from time to time without changing the ABI.

An open process prevents these nightmares, particularly in places where you don't want black box + black box. Also note that for core drivers like this, Linux tries to keep them all in one tree, so an alteration here does not lead to an incompatibility there.

Really, 'supernatural abilities' is believing that you can connect up a stack of black boxes built to an interface spec and not have something strange to extremely bad happen from time to time.

OS X and Windows are following a supernatural way of doing things. Yes, it works for a while, but one day it goes badly wrong.

nt_jerkface, your argument against Linux is in a lot of ways baseless. The main reason Linux does not have many commercial apps is market share, nothing else. It's not as if supporting multiple versions of Windows is a walk in the park.

Reply Score: 6

RE[5]: F**k this shit!
by nt_jerkface on Tue 15th Mar 2011 02:15 UTC in reply to "RE[4]: F**k this shit!"
RE[6]: F**k this shit!
by oiaohm on Tue 15th Mar 2011 02:58 UTC in reply to "RE[5]: F**k this shit!"
oiaohm Member since:
2009-05-30

"Please go and look how you do fully functional real-time support on Windows.


I think you exaggerate this feature as you have in the past but even at face value it means nothing to 99.9% of corporations or users.
"
When you do that, all the locking issues that cause people to complain their computer is slow to start up, and so on, show up really badly.

So no, it's not a feature 99.9% of users would directly use, but it is more sensitive to the underlying issues. You know something has gone badly wrong when the only way to make real-time work is basically to embed another kernel inside.

Both OS X and the Linux kernel can be made to operate in real time. Neither has major design flaws preventing it any more, and where flaws do exist, they are in source code you can alter. Hello, the OS X kernel is partly open source. There is no reason why the key low levels of the Windows OSes could not be released as open source as well to prevent these nightmares. These are preventable errors.

There are reasons why I don't tar OS X completely with the same brush as Windows. At least some sane choices have been made in the Apple camp about what has to be closed source.

"Also a lot of crashes on windows can be traced to kernel mode driver conflicts. Yes there is not magic.


Linux is more likely to crash from driver conflicts, especially if it is related to video. Just ask Thom.
"
Linux driver issues are basically all in one spot: mostly drivers in development, or drivers developed black-box style and therefore touchy about any kernel alteration. Yes, I have spoken to Thom in person.

On Windows they are all over the place. Some are really funny: insert a USB device that is not defective or tampered with into a particular machine and watch the machine reboot. The USB controller driver and the device driver for the USB device you just plugged in conflict. Leave the device in and the machine will remain in a never-ending reboot cycle until it's removed. Change either the USB controller driver or the device driver and the problem disappears. Yes, with Windows it's very much 'do you feel lucky?'.

"There are cases of Windows 7 SP1 destroying stored data on harddrives that do directly track to black box + black box equal o my god data loss.


Haven't heard of that. But I do know about the data loss issues with EXT4.
http://linux.slashdot.org/story/09/03/19/1730247/Ext4-Data-Losses-E...
"
How often are you going to bring this up? At the time, ext4 was not marked production-ready; it was marked testing. If you were dumb enough to format a drive as ext4 in 2009 and you lost your data because it was not tested, whose fault is that? The person's who used a filesystem for data before testing was complete.

It's like running a beta copy of Windows, and when it screws up and eats your data, saying it's Microsoft's fault. Sorry, no, it's not.

Of course, if the same fault happened this year I would be out for the maintainer's blood, because a fault like that should not happen now.

"nt_jerkface your argument against Linux in a lot of ways is baseless. Main reason why Linux does not have many commercial apps is market-share nothing else.


Commercial apps like VMServer that have been broken with kernel updates.
"
VMServer was also broken on OS X, Windows and Linux at different times due to kernel updates. This is not a Linux-unique issue.

Of course, you don't mention that VMServer has regularly failed on all of them. A black box in kernel space is a risk of failure. VMware has recently been working more with mainline Linux, and the failure rates are dropping; in fact, to lower rates of failure per year under Linux than on OS X or Windows.

Also, you most likely want to forget the big one, where all MS clients running inside VMServer recently failed completely due to a Windows update. Yes, the clients became incompatible with VMServer's paravirtualisation drivers.

So really, if you want a direct example of why not to black-box, you don't need to do anything other than chart VMServer failures per platform against how black-box the platform is to VMware.


As I have pointed out before the iphone had better commercial software support when it had 1/10th the marketshare of Linux. Linux could be more appealing to proprietary software and hardware developers but the people behind it simply don't give a shit. They value open source ideology over market share. That's a fact, not an opinion.


Big error: the iPhone's core OS is the same as OS X's. Lots of small applications from OS X could be ported to the iPhone without much recoding.

Android has also taken off with commercial applications, and it has a Linux kernel. So kernel design choices have zero effect on whether commercial developers will build for Linux.


But keep defending the status quo. MS would be upset if Linus & co decided to provide a platform that was proprietary friendly. They like how Linux stays ideological and at 1%. You probably deserve a few dozen copies of Windows 7 for all the defending you do of the status quo. That and a mac mini from Apple.


Linus is the kernel maintainer. Android disproves your point solidly. In over 99 percent of cases, application developers don't need custom drivers or anything custom in the kernel.

Have distributions at times made it too hard for commercial vendors? I have not defended that. Also, Red Hat alone had more closed-source commercial applications from third parties back when the iPhone had a small market share.

So it's a pure myth that Linux does not have closed-source applications.

The major reason people are not on the Linux desktop PC can be summed up in two words: Microsoft Office. It has become the de facto standard, and it has platform compatibility issues.

There is another issue on Linux; Nero struck it head-on. The simple problem was that, feature-wise, the open-source K3b had more and better features than Nero's burning software. So the Nero release for Linux, which does exist, carries a maximum price tag of 15 dollars and almost never sells.

Linux has a huge pool of open-source applications, which does not leave much room for large numbers of commercial applications.

Google's and HP's solutions (Android / webOS) break the API to give commercial applications a chance. Yes, creating a shortfall (i.e. a clean slate) has shown how mythical a lot of the arguments explaining Linux's lack of closed-source applications are.

Final point: what has prevented commercial application makers from joining up and making their own unified framework to run their applications on Linux?

That right there: bitter rivalry. It's about time you started placing the blame correctly.

Reply Score: 2

RE[7]: F**k this shit!
by nt_jerkface on Wed 16th Mar 2011 18:38 UTC in reply to "RE[6]: F**k this shit!"
nt_jerkface Member since:
2009-08-26


VMServer was also broken on OS X, Windows and Linux at different times due to kernel updates. This is not a Linux-unique issue.


Making stuff up again. I'd like to hear of these cases where VMServer was broken on Windows Server due to kernel updates.

I can source numerous breaks with kernel updates; rebuilding vmware kernel modules is a standard affair for Linux.

In fact, to lower rates of failure per year under Linux than on OS X or Windows.


That's a big fat lie.

Big error: the iPhone's core OS is the same as OS X's. Lots of small applications from OS X could be ported to the iPhone without much recoding.


Most iphone games are not available for OS X.

Android has also taken off with commercial applications, and it has a Linux kernel. So kernel design choices have zero effect on whether commercial developers will build for Linux.


Android is a completely different subject. It's designed by engineers who actually want to build a platform that encourages development of both proprietary and open source applications.

Linus is the kernel maintainer. Android disproves your point solidly. In over 99 percent of cases, application developers don't need custom drivers or anything custom in the kernel.

I'm not mixing kernel and application development problems; they just happen to intersect in some areas. The main problem with Linux distros is that their distribution systems are designed around open source.

Also, Red Hat alone had more closed-source commercial applications from third parties back when the iPhone had a small market share.


Server applications that are mostly command line or output to a webpage. Those companies only need to target a single distro and their customers are server admins that don't expect the same level of usability. It's a completely different market and one that works.

So it's a pure myth that Linux does not have closed-source applications.


I never once claimed that it doesn't. Linux is a PITA for companies like EA Games. That's what it comes down to.

The major reason people are not on the Linux desktop PC can be summed up in two words: Microsoft Office. It has become the de facto standard, and it has platform compatibility issues.


That is a major factor but even on netbooks where MS Office is not expected Linux has caused too many problems for new users.

There is another issue on Linux; Nero struck it head-on. The simple problem was that, feature-wise, the open-source K3b had more and better features than Nero's burning software. So the Nero release for Linux, which does exist, carries a maximum price tag of 15 dollars and almost never sells.


Um that's nice, Linux has a lousy game selection so it's not as if all bases are covered.

Linux has a huge pool of open-source applications, which does not leave much room for large numbers of commercial applications.


Except for a docx compatible office suite, tax software, bookkeeping software, video editing, and more. The best open source applications get ported to Windows so I really don't see what your point is.

Reply Score: 2

RE[6]: F**k this shit!
by Valhalla on Tue 15th Mar 2011 08:21 UTC in reply to "RE[5]: F**k this shit!"
Valhalla Member since:
2006-01-24

Linux is more likely to crash from driver conflicts, especially if it is related to video. Just ask Thom.

If one wanted an objective opinion on Windows vs Linux, you'd expect anyone to go to Thom? Seriously?

My last Windows upgrade was from XP to XP64 and it sure came with driver hell, including video drivers. I also remember a ton of problems when people were upgrading to Vista. And no, this is no evidence of Windows being worse than Linux either, it just shows that there's no basis for your 'more likely' since there are certainly flaky drivers in both Windows and Linux.

As I have pointed out before the iphone had better commercial software support when it had 1/10th the marketshare of Linux.

Totally different market segments. I'm pretty sure companies realise that targeting the Linux desktop with 'fart apps' would be commercial suicide, just as they aren't targeting the Windows desktop with them either.

Linux has a small desktop market share, which is reflected in the amount of commercial software available for it. However the whole 'not appealing to proprietary developers thing' is just bullshit. If the market is there then so are the apps. Just look at 3D/SFX, Linux is huge there and that is why all the latest versions of commercial top applications like Maya, XSI, Mudbox, Houdini, Nuke, Renderman, etc are available for Linux.

The reason this market exists on Linux is because it's the platform of choice for pretty much every large SFX/3D company, so despite the overall small market share, Linux is extremely well supported in this segment.

Reply Score: 4

RE[7]: F**k this shit!
by lucas_maximus on Tue 15th Mar 2011 15:45 UTC in reply to "RE[6]: F**k this shit!"
lucas_maximus Member since:
2009-08-18

Linux has a small desktop market share, which is reflected in the amount of commercial software available for it. However the whole 'not appealing to proprietary developers thing' is just bullshit. If the market is there then so are the apps. Just look at 3D/SFX, Linux is huge there and that is why all the latest versions of commercial top applications like Maya, XSI, Mudbox, Houdini, Nuke, Renderman, etc are available for Linux.

The reason this market exists on Linux is because it's the platform of choice for pretty much every large SFX/3D company, so despite the overall small market share, Linux is extremely well supported in this segment.


All this proves is that Linux is primarily used by people with a high level of technical proficiency, or where the core OS can be hidden from the user (as on Android devices).

There will never be a market for it anywhere else because of core usability issues.

Reply Score: 2

RE[7]: F**k this shit!
by nt_jerkface on Tue 15th Mar 2011 23:39 UTC in reply to "RE[6]: F**k this shit!"
nt_jerkface Member since:
2009-08-26

If one wanted an objective opinion on Windows vs Linux, you'd expect anyone to go to Thom? Seriously?


Yes I would; he has an interest in alternative operating systems and has given Linux a fair trial on numerous occasions. From the way he writes I can tell that he wants to like Linux but has had too many problems with it. He sure as hell is no Paul Thurrott.

My last Windows upgrade was from XP to XP64 and it sure came with driver hell, including video drivers.


XP to XP64 is a major upgrade. XP64 is based on Server 2003. You went from a desktop to a server OS.

I also remember a ton of problems when people were upgrading to Vista. And no, this is no evidence of Windows being worse than Linux either, it just shows that there's no basis for your 'more likely' since there are certainly flaky drivers in both Windows and Linux.


Linux is far more likely to break drivers between minor upgrades. You'd have to be pretty deluded to believe otherwise. The problem is not with the actual Linux drivers but a kernel level driver model that is not designed around end users or hardware companies.

Totally different market segments. I'm pretty sure companies realise that targeting the Linux desktop with 'fart apps' would be commercial suicide, just as they aren't targeting the Windows desktop with them either.


Fart apps? There are hundreds of full-length games on the iPhone. Why isn't The Sims 3 available for Linux? It is on every other platform, including the iPhone.

However the whole 'not appealing to proprietary developers thing' is just bullshit.


No distro is trying to cater to proprietary developers. They have software distribution systems that are designed around open source. Ubuntu has been moving towards supporting proprietary developers but is still centered around the repository system which favors open source.

The reason this market exists on Linux is because it's the platform of choice for pretty much every large SFX/3D company, so despite the overall small market share, Linux is extremely well supported in this segment.


Linux is used in rendering farms but is a minority platform when it comes to desktop drawing.

Reply Score: 2

RE[2]: F**k this shit!
by oiaohm on Tue 15th Mar 2011 00:24 UTC in reply to "RE: F**k this shit!"
oiaohm Member since:
2009-05-30

"We the users will suffer from desktop environment quirks while the egos battle it out, watching Microsoft, Google and Apple laugh all the way to the bank.


And this is different from how things are now in what way? They've already got enough reasons to laugh at the major F/OSS operating systems without even looking at the GUI side of things. Let's try Linux: kernel modules, no stable API or ABI, subsystems being redone and overlaid atop one another... should I go on?
"

First thing: please provide a case requiring a kernel module. Remember, most drivers on Linux can be implemented either as a kernel module or in userspace. The userspace ABI for making drivers is 100 percent unchanging and kernel-neutral. Yes, 10-year-old userspace drivers on Linux still work today with the latest kernel, no issues. Some userspace drivers for Linux also run on FreeBSD without change.

Now that you have a case requiring a kernel module, look at what damage that module can do if it malfunctions.

Basically, you want a secure, stable OS; closed-source kernel modules are not compatible with that, so there is no requirement for a stable kernel-space ABI. Treat Linux like MINIX when creating drivers as a closed-source third party and you are basically fine. Don't, and Linux will fight you as it changes to improve performance and security.

Layered subsystems: even Windows has these. Subsystems being redone is the natural development of all OSes.

Another classic case of a person putting up a baseless argument. The Linux kernel would not have its huge amount of embedded usage if creating closed-source drivers were tricky. Also, userspace drivers put you completely outside the scope of the Linux GPL licence and its requirements.

As a kernel module, on the other hand, you can link by mistake against GPL-only functions, putting you in breach of the GPL. The way Linux is designed protects closed-source developers' legal asses.
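The stability contrast being argued here can be illustrated from the userspace side. A minimal sketch, assuming a Linux or other glibc/libc system: the C library's getpid() wraps the getpid syscall, part of the userspace boundary the kernel keeps stable across releases, whereas in-kernel module interfaces carry no such promise.

```python
import ctypes
import os

# Load the process's C library (Linux/glibc assumption) and call getpid()
# through it; Python's os.getpid() ends at the same syscall. The syscall
# boundary is the unchanging, kernel-neutral contract the comment above
# refers to; in-kernel APIs make no equivalent guarantee.
libc = ctypes.CDLL(None, use_errno=True)

def pid_via_libc() -> int:
    return int(libc.getpid())

print(pid_via_libc() == os.getpid())  # True: both paths hit the same stable syscall
```

A binary making only such syscall-boundary calls keeps running across kernel upgrades; a kernel module making in-kernel calls may need rebuilding for each release, which is the distinction the comment draws.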

Reply Score: 5

RE[3]: F**k this shit!
by nt_jerkface on Tue 15th Mar 2011 01:44 UTC in reply to "RE[2]: F**k this shit!"
nt_jerkface Member since:
2009-08-26

Linux is designed to piss off closed source driver developers, period.

They won't even keep a subset of the kernel stable for VMWare.

How long are you going to defend them mr. ham? They've gone with the F*** proprietary drivers attitude for years and Linux just sits at 1%. God forbid they try something different.

Even when hardware companies submit drivers they still have the support lag problem.

You know I bet Microsoft loves their F*** proprietary companies attitude. A match made in heaven actually. Ideologues pissing off potential partners which just keeps them on the side of Microsoft.

Reply Score: 1

RE[3]: F**k this shit!
by lucas_maximus on Tue 15th Mar 2011 15:51 UTC in reply to "RE[2]: F**k this shit!"
lucas_maximus Member since:
2009-08-18

Interfaces are not supposed to change rapidly. It's a key software engineering concept ... the interface remains the same; the implementation behind it changes.

If the interface does need to change, you deprecate the old one and give people time to move over.

The only reason they keep changing it is either poor design or a deliberate attempt to force other devs to open-source their drivers (if this is true, it is another case of "freedom, but as we tell you").

People can bash Windows all they like, but a driver written for Windows XP in 2001 will still work with Windows XP today; the same is also true of drivers between Solaris versions.

Linux devs really have no excuse.

Reply Score: 2

RE[4]: F**k this shit!
by oiaohm on Tue 15th Mar 2011 21:37 UTC in reply to "RE[3]: F**k this shit!"
oiaohm Member since:
2009-05-30

Interfaces are not supposed to change rapidly. It's a key software engineering concept ... the interface remains the same; the implementation behind it changes.

If the interface does need to change, you deprecate the old one and give people time to move over.

The only reason they keep changing it is either poor design or a deliberate attempt to force other devs to open-source their drivers (if this is true, it is another case of "freedom, but as we tell you").

Funny, Linux did support the Unified Unix Driver Standard. No drivers were made for it; it was tying up a kernel-mode ABI for no gain. Developers insisted on calling the internal APIs they could see, not the stable ABI, even though it was supported in the 2.6 line. The reason the hardware makers gave repeatedly for not using the Unified Unix Driver Standard was that the performance hit (less than 0.01 of a percent) was too great.

People can bash Windows all they like, but a driver written for Windows XP in 2001 will still work with Windows XP today; the same is also true of drivers between Solaris versions.

Pardon? Windows XP drivers written in 2001 still work with Windows XP today; that is frozen kernel progression. The 2.4 tree, first released in 2001 and maintained through 2.4.37.11, has a stable kernel ABI across the whole 2.4 line, yet it has basically no closed-source drivers.

Now let's move on to Vista. Vista, just like Linux kernel 2.6, is pushing lots of drivers to userspace for the same basic reasons: there is no need to tie your hands behind your back.

Linux devs really have no excuse.


Problem is people like you are blind.

http://www.kernel.org/ Please note the kernels tagged longterm. They will be API compatible for as long as XP if not longer. Why not ABI. Something interesting. Turns out you must use the same compiler to have ABI. Reason why MS shipped driver development kits containing a different compiler to the normal Windows SDK. If you don't have the same compiler you must wrap the API that does cost performance.

Anyone who builds OpenSolaris themselves with a different compiler has also found that, from time to time, Solaris closed-source drivers don't run stably either. So this is a choice between stable and not stable.

Userspace is already wrapped by the syscall framework, and it is simpler to provide compatibility libraries there. Something people are not aware of: some of the old syscalls on Linux called from userspace are not processed by the Linux kernel but redirected to userspace libraries. So historic compatibility does not mean kernel bloat.

A userspace driver framework is far more stable. Drivers written against one, like CUPS drivers, can be picked up from ~1980s-era Unix systems, or from 1993-era Linux systems, and used on current Linux by using loaders.

Basically, userspace properly solves the kernel-to-kernel compatibility issue. And yet driver support from hardware makers has stayed as bad as it always was.

I am sorry, but on the scale of stability the userspace framework is massive, far surpassing the time frame any kernel-based ABI could offer. It is done in such a way that the interfaces never need to be revised in a way that breaks backwards compatibility. At worst, some syscalls get redirected to userspace for handling there.

Reply Score: 3

RE[5]: F**k this shit!
by oiaohm on Tue 15th Mar 2011 22:10 UTC in reply to "RE[4]: F**k this shit!"
oiaohm Member since:
2009-05-30

http://www.yl.is.s.u-tokyo.ac.jp/~tosh/kml/

Something I have not mentioned so far is that the userspace-kernel bridge in Linux can be turned into a kernel ABI with a third-party patch, keeping many of its advantages.

Basically, stop asking for a kernel ABI. Linux developers are providing driver makers with a highly stable ABI that can be made to operate in kernel mode if required. There are simply not enough drivers to justify integrating Kernel Mode Linux mainline.

Reply Score: 3

RE[6]: F**k this shit!
by oiaohm on Tue 15th Mar 2011 23:14 UTC in reply to "RE[5]: F**k this shit!"
oiaohm Member since:
2009-05-30

I really should have thought all this out.

Another thing about Linux userspace drivers: they are really forever drivers, in many ways.

Qemu can wrap a userspace driver to run on basically any CPU type. So if the maker only gives me a 32-bit x86 build and I have an ARM processor, no problem. Yes, inside qemu it runs a bit slower, but at least the driver will work.

chroots/OpenVZ zones can be created to present an old-system appearance to a userspace driver.

And of course there is Linux kernel syscall offload to userspace: the kernel can drop syscalls and userspace never needs to know, since they are now being provided from userspace.

Hardware makers don't particularly like the idea. Once Linux has an open source or userspace driver, it has the possibility of having that driver for every CPU and arch type Linux supports.

Yes, the Linux design is going after the same thing the MS .Net OS dreams have been going after.

Of course, being userspace code does not exclude it from being loaded into kernel space. And userspace code is kernel-neutral. So what is the problem? Linux had a problem that every other OS has suffered from over time, and they designed a solution, most likely the only solution that can properly work in all cases unless you go to something like a Java or .Net OS core.

Of course there is a disadvantage of userspace drivers. No nasty stunts can be done to stop reverse engineering.

Reply Score: 2

RE[5]: F**k this shit!
by nt_jerkface on Tue 15th Mar 2011 23:16 UTC in reply to "RE[4]: F**k this shit!"
nt_jerkface Member since:
2009-08-26

A userspace driver frameworks its far more stable.


So all drivers are better off in userspace?

Reply Score: 2

RE[6]: F**k this shit!
by oiaohm on Wed 16th Mar 2011 03:38 UTC in reply to "RE[5]: F**k this shit!"
oiaohm Member since:
2009-05-30

"A userspace driver frameworks its far more stable.


So all drivers are better off in userspace?
"

nt_jerkface good question.

Some drivers would, at the moment, perform badly done in userspace, mostly due to context switches. But if there was enough demand, merging Kernel Mode Linux would become important; that basically fixes those performance issues completely.

There are a small number, like memory management, CPU initialization and first video card initialization, that simply cannot be done in userspace: basically the ones a base kernel image of Linux will not run without. Yes, they are still drivers even though they are in the single-file kernel blob.

All the module-loadable drivers, other than a very small percentage (the ones that have to do operations from ring 0, like virtualisation; ring 0 is not on offer to userspace for very good reasons), would in most cases be better off done using the userspace APIs.

In fact, some drivers in the Linux kernel are being tagged to be ported to the userspace API simply to get them out of kernel space.

The most important thing about being done on the userspace API is this: if you are having kernel crashes and suspect a driver, and the drivers are done on the userspace API, you can basically switch to a microkernel model and run the driver in userspace. An application or driver crashing in userspace normally does not mean a complete system stop, which makes it a little simpler to find that one suspect driver.

So why are they not there already? fuse, cuse and buse have only existed for the last 8 years, so drivers from before that were done in kernel space because there really was no other way that would work.

Next: kernel space does have some advantages, and those advantages explain why the API in there is unstable.

The main reason for using kernel space over userspace is speed. For that speed there is a price: a kernel-space driver can bring the system to a grinding halt with a minor error. No such thing as a free lunch.

Because kernel space is for speed, any design error has to be removable at any time, so the APIs in kernel space are in flux. The BKL was a classic example: a good idea at the time, but many years later it had to go. A stable kernel ABI based on internal kernel structs would have prevented that removal happening as fast as it did. The reason you use kernel mode is the reason Linux kernel mode is in flux.

So the deal you choose between, with userspace and kernel mode, is basically this:

Userspace: highly stable; no issues with future versions of the kernel; unless something really rare happens it never crashes your computer (i.e. the driver might just get restarted); slightly slower, though depending on the device this may even be undetectable; can be cross-platform and cross-arch at basically the same speed.

Kernel space: fast; can crash your computer with even the smallest error; will have issues with future versions of the kernel at some point due to ABI/API/locking changes; normally not cross-arch or cross-platform, and when it is, it is normally as slow as the userspace API used from kernel mode, or worse, slower than using the userspace interfaces in the first place (what is the point?).
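The "driver might just get restarted" point can be sketched as a tiny supervisor loop, the way a microkernel-style system treats a crashing userspace driver. This is an illustrative sketch only, not real driver plumbing; the crashing child process stands in for a hypothetical faulty driver.

```python
import subprocess
import sys

def supervise(cmd, max_restarts=3):
    """Run a userspace 'driver' process and restart it on crash.

    A crash (non-zero exit) kills only the driver process, not the
    system, so the supervisor can simply start it again. Returns the
    number of restarts performed."""
    restarts = 0
    while True:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return restarts          # driver exited cleanly
        if restarts >= max_restarts:
            return restarts          # give up, but the OS is still up
        restarts += 1

if __name__ == "__main__":
    # Hypothetical flaky driver: a child process that always crashes.
    flaky = [sys.executable, "-c", "raise SystemExit(1)"]
    print(supervise(flaky, max_restarts=2))  # prints 2: restarted twice, then gave up
```

A kernel-space driver has no equivalent safety net: its smallest error takes the whole system down with it.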

Note those trade-offs apply to Windows, Linux and Solaris to different degrees. Linux, with its faster major kernel version cycles, shows issues with future kernel versions more often. A lot of people remember getting Windows 7, and XP before it, and finding a stack of devices no longer worked safely, i.e. adding the driver upset the computer.

The risks in kernel space are why Linux people want the source code in there, so it can be fully audited. Basically: do you like the blue/red screen of death, or the Linux kernel panic? If not, you really should agree with what the Linux developers want. Even MS is giving up on the idea; most of the gains of kernel space are not worth the loss in stability.

The way I put it: closed-source driver makers wanting to use kernel space are like someone carrying possible drug gear into an airport and trying to refuse a full inspection. Basically, inspection should be expected.

Do I expect driver makers, or any poor person who has to be inspected at an airport, to be happy about it? No, that would be asking too much. But they should understand why they got what they did.

Reply Score: 2

RE[7]: F**k this shit!
by nt_jerkface on Wed 16th Mar 2011 18:12 UTC in reply to "RE[6]: F**k this shit!"
nt_jerkface Member since:
2009-08-26

nt_jerkface good question.


It was a rhetorical question that both you and I know the answer to. GPU drivers are the big stinking elephant in the room and there is also the reality that most hardware companies would prefer to write a proprietary binary driver for a stable interface.

The way I put is that closed source driver makers wanting to use kernel space are like carrying possible drug using gear into an airport and trying to refuse having you ass and other private areas inspected. Basically inspection should be expected.


Why not let users decide? The security paranoid can use basic open source drivers and everyone else can use proprietary drivers.

Like most defenses of the Linux driver model you ignore problems resulting from that model that users continually face.

The user-space interface is only mostly stable. Every year you defend the Linux driver model and every year a new Linux user gets some trivial device broken from an update and goes back to Windows or OSX.

There is no Linux distro that can be trusted to auto-update itself along with a typical desktop application suite and a basic set of peripherals. No Linux distro has a reliable record. Linux would have far more than 1% if the people at the top were concerned with building a system that finds a balance between the needs of users, open source advocates and hardware companies. But Linux is designed by open source advocates with little regard for users or hardware companies. Linus doesn't want proprietary drivers in his precious kernel and will gladly sacrifice marketshare to achieve this goal.

Reply Score: 2

RE[4]: F**k this shit!
by allanregistos on Thu 17th Mar 2011 02:46 UTC in reply to "RE[3]: F**k this shit!"
allanregistos Member since:
2011-02-10


People can bash Windows all they like but a driver written for Windows XP in 2001 will still work with Windows XP today, the same is true also with Drivers between Solaris Versions.

Linux dev's really have no excuse.

Hi, please read:
http://www.kernel.org/pub/linux/kernel/people/gregkh/misc/2.6/stabl...

Reply Score: 2

RE[5]: F**k this shit!
by nt_jerkface on Thu 17th Mar 2011 03:16 UTC in reply to "RE[4]: F**k this shit!"
nt_jerkface Member since:
2009-08-26

"
People can bash Windows all they like but a driver written for Windows XP in 2001 will still work with Windows XP today, the same is true also with Drivers between Solaris Versions.

Linux dev's really have no excuse.

Hi, please read:
http://www.kernel.org/pub/linux/kernel/people/gregkh/misc/2.6/stabl...
"

And the resounding success of Linux on the desktop proves he was right. Drivers are never a problem in Linux. How dare anyone question Greg KH!

http://ubuntuforums.org/showthread.php?t=1678203
http://ubuntuforums.org/showthread.php?t=1485772
http://ubuntuforums.org/showthread.php?t=1700897
http://ubuntuforums.org/showthread.php?t=948919

Reply Score: 2

RE[6]: F**k this shit!
by smitty on Thu 17th Mar 2011 04:02 UTC in reply to "RE[5]: F**k this shit!"
smitty Member since:
2005-10-13

And the resounding success of Linux on the desktop proves he was right. Drivers are never a problem in Linux. How dare anyone question Greg KH!

It's doing better than the BSDs. Anyone who claims that the reason Linux hasn't had a "year of the desktop" yet is the lack of a stable API for kernel drivers should first have to explain why having one hasn't worked for other OSs.

Reply Score: 4

RE[6]: F**k this shit!
by Nth_Man on Thu 17th Mar 2011 10:05 UTC in reply to "RE[5]: F**k this shit!"
Nth_Man Member since:
2010-05-16

> > Hi, please read:
> > http://www.kernel.org/pub/linux/kernel/people/gregkh/misc/2.6/stabl...

> Drivers are never a problem in Linux. How dare anyone question Greg KH!

> http://ubuntuforums.org/showthread.php?t=1678203
> http://ubuntuforums.org/showthread.php?t=1485772
> http://ubuntuforums.org/showthread.php?t=1700897
> http://ubuntuforums.org/showthread.php?t=948919 [/q]

You talk as if you had understood what Greg KH wrote. If someone actually reads what Greg wrote, he sees:

Releasing a binary driver for every different kernel version for every distribution is a nightmare, and trying to keep up with an ever changing kernel interface is also a rough job.

Simple, get your kernel driver into the main kernel tree (remember we are talking about GPL released drivers here)

and
If your driver is in the tree, and a kernel interface changes, it will be fixed up by the person who did the kernel change in the first place. This ensures that your driver is always buildable, and works over time, with very little effort on your part.

The very good side affects of having your driver in the main kernel tree are:
- The quality of the driver will rise as the maintenance costs (to the original developer) will decrease.
- Other developers will add features to your driver.
- Other people will find and fix bugs in your driver.
- Other people will find tuning opportunities in your driver.
- Other people will update the driver for you when external interface changes require it.
- The driver automatically gets shipped in all Linux distributions without having to ask the distros to add it.

And later you post links about complaints regarding closed-source, proprietary drivers. Don't you find that strange?

Reply Score: 2

RE[7]: F**k this shit!
by oiaohm on Thu 17th Mar 2011 10:14 UTC in reply to "RE[6]: F**k this shit!"
oiaohm Member since:
2009-05-30



You talk like if you had understood what Greg KH wrote. If someone reads what Greg wrote, he sees:
Releasing a binary driver for every different kernel version for every distribution is a nightmare, and trying to keep up with an ever changing kernel interface is also a rough job.

Simple, get your kernel driver into the main kernel tree (remember we are talking about GPL released drivers here)

and
If your driver is in the tree, and a kernel interface changes, it will be fixed up by the person who did the kernel change in the first place. This ensures that your driver is always buildable, and works over time, with very little effort on your part.

The very good side affects of having your driver in the main kernel tree are:
- The quality of the driver will rise as the maintenance costs (to the original developer) will decrease.
- Other developers will add features to your driver.
- Other people will find and fix bugs in your driver.
- Other people will find tuning opportunities in your driver.
- Other people will update the driver for you when external interface changes require it.
- The driver automatically gets shipped in all Linux distributions without having to ask the distros to add it.

And later you write links about complains to closed source, proprietary drivers. Don't you find it strange? [/q]

Also, he forgot my earlier comment about the longterm kernels that binary drivers are tested against.

Distributions are truly making life hard for end users by not agreeing to reduce the number of kernel versions out there for binary drivers to target.

Edited 2011-03-17 10:15 UTC

Reply Score: 2

RE[2]: F**k this shit!
by allanregistos on Tue 15th Mar 2011 04:50 UTC in reply to "RE: F**k this shit!"
allanregistos Member since:
2011-02-10

"We the users will suffer from desktop environment quirks while the egos battle it out, watching Microsoft, Google and Apple laugh all the way to bank.


And this is different from how things are now in what way? They've already got enough reasons to laugh at the major F/OSS operating systems without even looking at the GUI side of things. Let's try Linux: kernel modules, no stable API or ABI, subsystems being redone and overlayed atop one another... should I go on?
"

On the kernel "no stable API" nonsense, please read:
http://www.kernel.org/pub/linux/kernel/people/gregkh/misc/2.6/stabl...

Reply Score: 3

RE[3]: F**k this shit!
by oiaohm on Tue 15th Mar 2011 05:09 UTC in reply to "RE[2]: F**k this shit!"
oiaohm Member since:
2009-05-30

"[q]We the users will suffer from desktop environment quirks while the egos battle it out, watching Microsoft, Google and Apple laugh all the way to bank.


And this is different from how things are now in what way? They've already got enough reasons to laugh at the major F/OSS operating systems without even looking at the GUI side of things. Let's try Linux: kernel modules, no stable API or ABI, subsystems being redone and overlayed atop one another... should I go on?
"

That's the Kernel no stable API nonsense?
http://www.kernel.org/pub/linux/kernel/people/gregkh/misc/2.6/stabl... [/q]

This is a key block of text.
This is being written to try to explain why Linux does not have a binary kernel interface, nor does it have a stable kernel interface. Please realize that this article describes the _in kernel_ interfaces, not the kernel to userspace interfaces. The kernel to userspace interface is the one that application programs use, the syscall interface. That interface is _very_ stable over time, and will not break. I have old programs that were built on a pre 0.9something kernel that still works just fine on the latest 2.6 kernel release. This interface is the one that users and application programmers can count on being stable.

Notice this very clearly points out that kernel-to-userspace has been stable since Linux 0.9-something. So there is a stable ABI/API. Of course, everyone raising the argument forgets the existence of these:
http://lwn.net/Articles/296388/ CUSE
http://fuse.sourceforge.net/ fuse
and of course Buse.

These are all ways to create drivers using the stable kernel-to-userspace interface. Now the question becomes: why do you need a kernel ABI in the first place, other than the userspace one?
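The stability of that kernel-to-userspace boundary is easy to poke at: every userspace program, whatever its age, ends up at the same syscall interface, usually via libc wrappers. A minimal sketch (assuming a Linux or similar Unix-like system where the process's libc is loadable via ctypes):

```python
import ctypes
import os

# Load the C library the process is already linked against.
libc = ctypes.CDLL(None)

# getpid() is a thin libc wrapper around the getpid syscall, part of
# the kernel-to-userspace interface that the quoted text says has been
# stable since the pre-0.9 kernels.
pid_via_libc = libc.getpid()
pid_via_python = os.getpid()  # Python's own route to the same syscall

# Two independent paths into the same stable interface agree.
assert pid_via_libc == pid_via_python
```

Old binaries keep working for the same reason these two routes agree: the syscall boundary does not move, only the code behind it does.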

Reply Score: 4

RE[4]: F**k this shit!
by allanregistos on Tue 15th Mar 2011 05:28 UTC in reply to "RE[3]: F**k this shit!"
allanregistos Member since:
2011-02-10


There are all ways to create drivers using the stable kernel to user-space interface. Now the question becomes why do you need a Kernel ABI in the first place other than the Userspace one?


Education. But I do not think hardware manufacturers are ignorant of this information. I am not a developer at this level, not to mention a driver developer, but why is it that most printers/scanners/peripherals etc. have no Linux driver by default, if one can be written easily in userspace without touching the GPL?

Reply Score: 1

RE[5]: F**k this shit!
by oiaohm on Tue 15th Mar 2011 10:46 UTC in reply to "RE[4]: F**k this shit!"
oiaohm Member since:
2009-05-30

"
There are all ways to create drivers using the stable kernel to user-space interface. Now the question becomes why do you need a Kernel ABI in the first place other than the Userspace one?


Education. But I do not think that Hardware manufacturers are ignorant of this information. I am not a developer of this level not to mention being a driver developer, but why is it that most Printer/Scanner/peripherals etc have no Linux driver by default, if they can write it easily using the userpace without touching GPL?
"

Printer and scanner drivers on all platforms are userspace; there is no kernel-mode part on any of them. So yes, there is no excuse other than deciding that Linux has too small a market share to support.

Another thing a lot of people are not aware of is that OS X and Linux use the same printer service struct. Yet there are drivers for OS X and no drivers for the same printer on Linux from the hardware maker. Canon's excuse is a lack of means to provide userspace applications, but really the two drivers would be basically the same source code with minor alterations.

Some of it is also MS interference. Before the MS-Novell deal, Novell was working on an interface to take MS Windows printer drivers and use them directly under Linux, solving that problem. They stopped work on it the very day the contract with Microsoft was signed.

Also, under Vista and later there is technically no reason for USB devices to have a driver in kernel space at all. Yet hardware makers are still releasing kernel-space USB drivers for Vista and 7, causing unstable states and bad outcomes. The reason: it's cheaper, as they don't have to redo their code base from XP. Yes, the issue of missing or badly written drivers is not restricted to Linux.

There is also the nightmare under Windows where one device can have 50 different drivers; even though it's the same device, which driver you get depends on whose brand is on it.

Yes, Linux is the same as Windows with USB drivers: no need for kernel space. PCI and PCI-E drivers don't have to be in kernel space either under Linux, though for Windows that might be asking too much. PCI and PCI-E being passable to userspace is why virtual machines under Linux can pass PCI and PCI-E cards to the contained OS untouched.

In the open source world, if hardware makers release the specs, in most cases they don't have to write drivers at all. Releasing the specs would also allow consumers to see that product X here for 200 dollars is the same product over there for 20 dollars, just with a different company's brand on it. Yes, consumers are being ripped off all the way to the bank and getting inferior-quality drivers at the same time. This is a major reason some hardware makers don't want to work with open source: it unmasks the truth.

As soon as you start talking kernel space and drivers, you are opening a Pandora's box that a lot of parties don't want you looking into, because a lot of the instability you suffer from is suddenly explained, and the fact that you are being ripped off by particular hardware makers becomes clear as day.

It is very simple to just blame Linux for its poor driver support. The big thing to be aware of is that it's not Linux alone: it's hardware makers who don't want the truth about their hardware out there.

For example, every HP scanner and printer on the shelf today has perfect support under Linux as open source. Every Brother printer on the shelf today has perfect support for Linux as a binary driver; OK, it's a bit of a tricky driver to install, but it is provided. (Yes, they need to work on the language of their install instructions so a normal human can understand them.)

These are not reverse-engineered drivers. The final argument is lack of market share, but with the slapdash support I see for Vista and 7, I have to say 90 percent of the reason is profit-making at the buyer's expense, nothing more.

Yes, that Linux can do it does not matter: they just will not spend the money, or give the open source world the specs so it can do the job itself.

NT supporters like nt_facejerk don't really want people to be aware of the truth, since people aware of the truth might start asking for full interface specs for products and other nasty things.

An interesting thing here as well: I have found that hardware makers who provide open source drivers more often maintain better-quality drivers for Windows. That a piece of hardware runs under Linux should be seen as a good sign that you will have less trouble under Windows with that device, compared to the device sitting next to it that does not support Linux.

Reply Score: 5

RE[6]: F**k this shit!
by phoenix on Tue 15th Mar 2011 15:34 UTC in reply to "RE[5]: F**k this shit!"
phoenix Member since:
2005-07-11

Another thing a lot of people are not aware is that OS X and Linux use the same printer service struct.


Known as CUPS.

Yet there are drivers for OS X and no drivers for the same printer for Linux from the hardware maker.


A CUPS PPD file works on any system running CUPS, so long as no extra binaries/filters are needed. I've used MacOS X PPD files on Linux and on FreeBSD without any issues.

Cannon excuse is lack of means provide userspace applications. But really the two drivers would be basically the same source code with minor alterations.


A "CUPS driver" that needs extra binaries is not a true CUPS printer driver, and it's those drivers (like SpliX, HPLIP, etc) that cause problems with cross-OS support.

Reply Score: 5

RE[7]: F**k this shit!
by oiaohm on Tue 15th Mar 2011 21:43 UTC in reply to "RE[6]: F**k this shit!"
oiaohm Member since:
2009-05-30

"Another thing a lot of people are not aware is that OS X and Linux use the same printer service struct.


Known as CUPS.

Yet there are drivers for OS X and no drivers for the same printer for Linux from the hardware maker.


A CUPS PPD file works on any system running CUPS, so long as no extra binaries/filters are needed. I've used MacOS X PPD files on Linux and on FreeBSD without any issues.

Cannon excuse is lack of means provide userspace applications. But really the two drivers would be basically the same source code with minor alterations.


A "CUPS driver" that needs extra binaries is not a true CUPS printer driver, and it's those drivers (like SpliX, HPLIP, etc) that cause problems with cross-OS support.
"
Please read the CUPS specs. CUPS printer drivers are allowed to include platform-dependent interface parts to talk device protocols; these are called CUPS filters.

A CUPS driver without its matching CUPS filter requirements is a paperweight.

Yes, it is still a CUPS driver if it depends on a filter, since filter support is built into the CUPS driver design.

Reply Score: 2

v RE[6]: F**k this shit!
by Almost_Freetarded on Wed 16th Mar 2011 20:50 UTC in reply to "RE[5]: F**k this shit!"
RE[7]: F**k this shit!
by oiaohm on Thu 17th Mar 2011 00:58 UTC in reply to "RE[6]: F**k this shit!"
oiaohm Member since:
2009-05-30

"
Also releasing the specs would allow consumers to see that X product here for 200 dollars is the same product over here for 20 dollars just with a different company brand on it.


Bullshit! What the heck have you in mind? I don't know any hardware that fits into your description:

Graphics cards? No. The costly part is the chip, and the chip is always either from Intel, Nvidia or AMD.

Hard disks? No
Chipsets? Same as with graphics cards.

Printers? Sorry no, there IS a difference between a $200 cannon and some $20 no name.
"
$20 to $200 happens with webcams and security cams at times, where in fact they are exactly the same chip and exactly the same optics from exactly the same factory. Even the case is from the same factory; the only thing different is the branding.

Scanners are another case where the same chipset, scanning head, metal frame and circuit board are shipped to the customer in a different plastic case with different branding, at majorly different prices.

Note all these devices have something in common: when you read the spec sheets, they are normally devoid of what's inside, listing only physical shape and the number of dpi/pixels. Yes, it's the small items people are not paying attention to that they are getting ripped off on.

A lot of DVD-ROM and Blu-ray drives branded as being from different makers are in fact out of the same factory and tested to the same quality standards, so that 20-dollar price difference on the shelf could be nothing more than a brand payment. The same goes for floppy drives and particular power supplies.

People normally pay more attention to items like video cards, motherboards, hard drives and printers. But even with video cards, reading the chip maker's specs would warn you that particular cards running faster than another card are in fact overclocked and are going to have a short life.

So yes, you are paying more money for a video card that will burn out sooner, when another card next to it with a different chipset, running at the same speed for the same price, is not overclocked and so will last.

Consumers are getting ripped off in so many ways it's not funny, because they really don't have the information to make correct selections.

"Also some is MS interference. Before MS Novell deal. Novell was working on providing the interface to take MS Windows drivers for printers and use them directly under Linux so solving that problem. They stopped work on that the very day the contract with Microsoft was signed.


Boohoo. Where is the basement developer army? So Loonix needs ebil Nov€ll now? According to Queen Pamela Jones, Nov€ll was ebil before they signed the deal with M$ anyway. How can you use something so tainted?
"
The staff working on it were from SUSE, pre-Novell. Not everyone in Novell is evil.

"There is also the nightmare under windows where 1 device can have 50 different drivers even that its the same device it all depends who brand is on the device what driver you get.


And most of them work. Where is the problem actually? As if it is a common scenario that users have so many versions of the same hardware.
"
In fact, with webcams of the same chipset, most of the drivers don't work properly (i.e. they work just enough that the webcam appears to function) and cause random, untraceable computer crashes. A lot of these I have solved for Windows users by going to Linux, finding out which Linux driver runs the device, making my way back to the company that made the chip inside, getting their current driver, and tweaking its USB IDs so it works under Windows. Yes, with ATI and Nvidia it is 100 times simpler to get back to the chipset maker and get the generic driver that works.

This is the problem. People complain about Windows being unstable; in a lot of cases it's these poor-quality drivers, or the computer was assembled without doing basic things like checking the RAM with memory-testing software, as many RAM makers in fact instruct.

"Nt supporters like nt_facejerk don't really want people to be aware of the truth. Since people aware of the truth might start asking for full interface specs to products and other nasty things.


Stop deluding yourself. What full interface specs? The people who care know what needs to be known already. Haven't you never noticed the tons of hardware sites like Tomshardware.com? And the ones who don't bother won't bother anyway.
"
Tomshardware does not tell you the maker of the chipset inside. Even the groups that disassemble devices in a lot of cases cannot tell you, due to chip over-branding or the chip not being produced by the party that designed it. Full interface specs might start being asked for by Tomshardware and others, so users can get generic drivers that work, not snapshots of generic drivers that have never been updated. Of course it would be simpler if device makers just started being more truthful: the design of device X is from here; refer to them for third-party drivers.

"I have found hardware makers who provide open source drivers more often maintain better quality drivers for Windows


Easy. It's because vendors who can afford to support Windows and Linux are usually not some poor mom&pop shop.

Are you seriously claiming that they gain some leet programming skillz just by creating a Linux driver?
"
No, I am not. It's not leet programming skills; it's called peer review, a key point of science. Without peer review you are talking alchemy.

Peer review is a good process for producing dependable results: errors that would otherwise be missed get spotted. In a lot of cases the Windows closed-source driver and the Linux open source driver share common code, so any error peer review catches in the open source driver ends up fixing the Windows one too. All of this follows simply from an open source driver existing.

The difference exists due to process, no matter how large or small the company is. You can even see a difference inside the same company between two devices it makes, one with and one without open source drivers. You just have not been looking.

A company deciding to kill support for profit is also discouraged by an open source driver existing, i.e. not making a Windows 7 driver for device X so people have to buy device Y. That may or may not work if an open source driver exists, but if it doesn't work it makes them look really foolish. So devices with open source drivers have a longer usable life span on Windows and are maintained longer.

This is the big problem: people don't want to admit that open source drivers help Windows users as well as Linux, OS X and other platforms.

Those are the simple facts of the mess.

Reply Score: 2

RE[4]: F**k this shit!
by jabjoe on Tue 15th Mar 2011 11:21 UTC in reply to "RE[3]: F**k this shit!"
jabjoe Member since:
2009-05-06

BUSE? Block device in User SpacE? Cool, didn't know that one. :-)

Reply Score: 3

RE: F**k this shit!
by searly on Tue 15th Mar 2011 10:15 UTC in reply to "F**k this shit!"
searly Member since:
2006-02-27

"Microsoft, Google and Apple laugh all the way to bank."
I think that Microsoft, Google and Apple are largely indifferent to Gnome, KDE or Unity ... the Linux desktop is (at the moment) just not a serious threat to them (the mobile space with Linux/Android etc. is a different matter). And before I am shot down ... I use Linux/Gnome exclusively, am very happy with it, and personally think it is a better desktop OS than Windows 7 (not sure about Mac OS X, I haven't really used it and am not prepared to pay the Apple premium) ... I just think that Linux doesn't have the mindshare among the general public to be a real threat ...

Reply Score: 1

Also this is smoke screen by Canonical
by oiaohm on Mon 14th Mar 2011 20:23 UTC
oiaohm
Member since:
2009-05-30

Create a stack of infighting and they hope we will forget that Canonical is redirecting the Amazon music store referrals so that 75 percent goes to them and 25 percent to GNOME, when it used to be 100 percent GNOME.

Yes, this is something that has to be settled. The big problem was trying to shove the blame on Freedesktop.org for not working. Everything was done by the book and correctly at Freedesktop.org, which even has a way for anyone to raise a dispute. So at no point can Freedesktop.org really be blamed.

Now the next question: if Freedesktop.org were fully working, it would be a place with multiple spot fires of people in dispute.

Items that should have been submitted to Freedesktop.org but have not been: the KDE and Gnome configuration systems. Two places to store the same data is very stupid and leads to incompatibilities. Removing this kind of duplication is one of the reasons Freedesktop exists: to provide a place to sort it out. The lack of submissions that should have happened is why Freedesktop lacks the flame fights it should have.

Neither KDE nor Gnome has the right to be on a high horse over Freedesktop. Yes, KDE has been better at supporting it, but KDE, you still have more to do.

Canonical, you think Freedesktop.org exists to make a unified, user-friendly Linux desktop. Your own lack of involvement speaks volumes.

True is the charge against Canonical of working in locations that require copyright assignment to Canonical, locking out a section of the open source development community. There are a lot of developers who are not allowed to do copyright assignment.

True is that Canonical used blackmail to get what they wanted with the Amazon music store. Canonical, please clean up your own house before throwing more stones, because stones might be thrown back.

Reply Score: 2

molnarcs Member since:
2005-09-10


True is the charge against Canonical of working in locations that require copyright assignment to Canonical so locking out a section of the open source development community


That's true and all, and also, completely irrelevant to the issue at hand. Mark's and Seigo's problem is that GNOME refuses to collaborate on enhancing interoperability across the whole F/LOSS landscape. One means of collaboration is via freedesktop.org, and by establishing low-level frameworks (think of D-BUS as a shining example of something that benefited ALL F/LOSS desktop environments) and specs. When it comes to these specs (or to the StatusNotifier specification) Canonical, of course, does not require copyright assignment.

Reply Score: 5

Duck and Cover
by segedunum on Mon 14th Mar 2011 20:31 UTC
segedunum
Member since:
2005-07-06

I don't get the point of Jeff Waugh. For years he's seemed to be a corrosive influence, vilified even by people in the community he claims to represent, and for some reason he miraculously turns up over this issue and starts dishing out his pearls of wisdom.

I really don't know what else could have been done to get StatusNotifier into Gnome. It was discussed at length and didn't just appear out of nowhere; the Gnome devs asked for some changes they seemed quite receptive to, and when those were duly accommodated there was silence, and it was rejected with few, if any, reasons given. That's a classic tactic: you ask for changes you hope won't be made, and then you stonewall when they are.

I think we can all agree that Canonical have got a lot of things a bit off skew, but all I know is that the stuff that Canonical put into Unity and KDE put into KDE actually works and it hasn't hurt anyone.

This little gem in Dave Neary's blog tells you all you need to know about how they really feel about collaboration:

This is not a compelling problem statement. No user ever had a problem because notifications didn’t use D-Bus.

I don't know what you can say to that. D-Bus was initiated many years ago, by a prominent Gnome developer no less, to ensure that apps and desktops could communicate with each other and work together, thus helping those very same users. KDE embraced D-Bus and uses it extensively. I have no idea what's been going on with Gnome. As far as I can see they've reimplemented it several times with little in the way of results.

As for the Freedesktop nonsense, Seigo and many others have been trying to get Freedesktop working for years and haven't been helped one iota. Mainly Gnome developers then turn around every time and say that it is broken as a justification for not putting in any input. It will never be fixed, mark my words, but Gnome not being a part of it might not be very important anyway.

The distasteful thing is that various Gnome devs don't just come out and say "Look, we don't care about Freedesktop or collaboration and it's not worth our time". They paint their position as the exact opposite, reject anything related to it, and then spin like crazy: everyone has 'misunderstood', certain things supposedly weren't done in a 'he said, she said' exchange (very important that things can't be proved), and they try to paint a different picture of what went on on the mailing lists because they know the discussions are too broad to nail them down.

I've never got this distasteful attitude that seems to exist at the core of Gnome. It certainly doesn't happen everywhere in the project or many of its applications, but it does happen at the core of it.

Reply Score: 13

RE: Duck and Cover
by drcouzelis on Mon 14th Mar 2011 20:57 UTC in reply to "Duck and Cover"
drcouzelis Member since:
2010-01-11

This is not a compelling problem statement. No user ever had a problem because notifications didn’t use D-Bus.


I think you may have missed the point of this quote. Dave isn't talking about collaboration. Instead, he's talking about how to write a good specification, which starts with defining the problem statement.

Dave is saying, in a slightly humorous way, that "Notifications don't use D-Bus" is not a proper problem statement. I completely agree.

Did you understand what he wrote differently than I did?

Reply Score: 4

RE[2]: Duck and Cover
by _txf_ on Mon 14th Mar 2011 21:07 UTC in reply to "RE: Duck and Cover"
_txf_ Member since:
2008-03-17

Dave is saying, in a slightly humorous way, that "Notifications don't use D-Bus" is not a proper problem statement. I completely agree.


Except that wasn't the problem and nobody ever said that it was. The problem was XEmbed (which was inflexible and designed for a different era). The solution was to use D-Bus for IPC (would he rather people reimplement a new IPC mechanism just for systrays!?).

Reply Score: 3

RE[3]: Duck and Cover
by Kitty on Mon 14th Mar 2011 21:51 UTC in reply to "RE[2]: Duck and Cover"
Kitty Member since:
2005-10-01

Gnome shell seems to have tackled the problem of notification and tray area from the functional point of view, not just the technical one.

Thus not "XEmbed is inflexible; a D-Bus protocol to signal application status would be better if coupled with a new tech for notification icons and menus"

But "why is an application putting an icon in tray? Is it sending a notification the user must interact with? is it a system-wide status signal and control point? What kind of actions would be possible in a transient notification, which should always be accessible? What about urgency level affecting the display?"

From this derived the basic mismatch you can read about in the discussion: the Gnome coders were dubious about a pure transmission method that makes no hypothesis about the interaction on the receiving end, because they seemed to consider that coupling important for the redesign of the whole.
And the KDE devs stated that yes, the data was just being sent and totally decoupled from the presentation... so much so that there's no indication whatsoever of what the user-facing application can or should do with the data it receives.
Frankly, both aspects seem worthy of work to me, on different levels, but I'm sure it's the prerogative of the Gnome shell coders to deem that the tech proposal was not solving the UX design problems they were interested in with respect to the tray - at least at the moment. And what should they have done with a tech they had no practical interest in, if not let it be until they could contribute real-world requirements?

Reply Score: 2

RE[3]: Duck and Cover
by dneary on Tue 15th Mar 2011 10:41 UTC in reply to "RE[2]: Duck and Cover"
dneary Member since:
2011-03-15

"Dave is saying, in a slightly humorous way, that "Notifications don't use D-Bus" is not a proper problem statement. I completely agree.


Except that wasn't the problem and nobody ever said that it was. The problem was Xembed (which was unflexible and designed for a different era). The solution was to use dbus for ipc (would he rather people reimplement a new ipc just for systrays!?).
"

The problem was "there are too many applications creating icons in the systray/creating custom panel applets, they all behave in slightly different & inconsistent ways, and there is no straightforward way for an application developer to indicate the state of his application across different desktop environments without redoing a bunch of work".

XEmbed is an implementation detail.

Cheers,
Dave.

Reply Score: 1

RE[4]: Duck and Cover
by _txf_ on Tue 15th Mar 2011 11:09 UTC in reply to "RE[3]: Duck and Cover"
_txf_ Member since:
2008-03-17

The problem was "there are too many applications creating icons in the systray/creating custom panel applets, they all behave in slightly different & inconsistent ways, and there is no straightforward way for an application developer to indicate the state of his application across different desktop environments without redoing a bunch of work".

XEmbed is an implementation detail.


Certainly that is the case; it doesn't matter what the old implementation was (XEmbed etc.). The thing is that every app decided how its icon was drawn in the systray, instead of the shell choosing the best representation.

It should be up to the DE to choose how this information is displayed to the user, be that a systray, floating icons, whatever. The spec as I understand it left that completely open, giving people maximum flexibility (and hopefully enabling cool stuff in the process) instead of a rigid systray-2 approach.
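To make that decoupling concrete, here is a minimal sketch (Python, with purely illustrative names - this is not the real StatusNotifierItem D-Bus interface) of the idea being argued over: the application publishes abstract state only, and each desktop environment decides how, or whether, to present it.

```python
# Sketch of the StatusNotifier idea: the app publishes abstract state,
# the desktop decides the presentation. All names here are hypothetical.

class StatusItem:
    """What an application exposes: pure state, no pixels."""
    def __init__(self, item_id, category, status, icon_name):
        self.item_id = item_id      # e.g. "my-mail-client"
        self.category = category    # e.g. "Communications"
        self.status = status        # "Passive" | "Active" | "NeedsAttention"
        self.icon_name = icon_name  # a themed icon name, not a bitmap

def render_as_systray(item):
    """One possible host: a classic tray that always shows an icon."""
    return f"[tray icon: {item.icon_name}]"

def render_as_message(item):
    """Another host: a shell that only surfaces urgent items."""
    if item.status == "NeedsAttention":
        return f"{item.item_id} needs attention"
    return ""  # passive/active items are hidden entirely

mail = StatusItem("my-mail-client", "Communications",
                  "NeedsAttention", "mail-unread")
print(render_as_systray(mail))  # a tray-style host draws an icon
print(render_as_message(mail))  # a shell-style host shows text instead
```

The same `StatusItem` drives both hosts; neither the icon layout nor the decision to hide passive items is the application's business, which is exactly the flexibility (and, per the Gnome objection, the lack of UX guidance) discussed above.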

Reply Score: 2

RE[4]: Duck and Cover
by segedunum on Tue 15th Mar 2011 17:30 UTC in reply to "RE[3]: Duck and Cover"
segedunum Member since:
2005-07-06

XEmbed is an implementation detail.

An implementation detail that happens to be very important for users, because XEmbed is widely agreed to have a great deal of things wrong with it.

The question is, what replaces it? Does it get replaced by something that desktops and applications can agree on, communicate through and work with, or do we go the traditional Unix and CDE route with a ton of fragmentation that provides no benefits to anyone?

You know fine well why Aaron specifically mentioned D-Bus in what he wrote. Do we really need to go over why D-Bus was initiated and what benefits common communication between differing applications and desktops brings to users?

I think you're just digging a bigger hole here Dave, and it's sad to see.

Reply Score: 5

RE: Duck and Cover
by Thom_Holwerda on Mon 14th Mar 2011 22:06 UTC in reply to "Duck and Cover"
Thom_Holwerda Member since:
2005-06-29

I've never got this distasteful attitude that seems to exist at the core of Gnome. It certainly doesn't happen everywhere in the project or many of its applications, but it does happen at the core of it.


Whatever it is, fear has been intensifying it. For all its flaws, Ubuntu is the most popular Linux distribution on the desktop, and therefore, one of GNOME's biggest customers - certainly the most visible customer.

And they've lost this customer.

Reply Score: 3

RE[2]: Duck and Cover
by darknexus on Mon 14th Mar 2011 22:23 UTC in reply to "RE: Duck and Cover"
darknexus Member since:
2008-07-15

"I've never got this distasteful attitude that seems to exist at the core of Gnome. It certainly doesn't happen everywhere in the project or many of its applications, but it does happen at the core of it.


Whatever it is, fear has been intensifying it. For all its flaws, Ubuntu is the most popular Linux distribution on the desktop, and therefore, one of GNOME's biggest customers - certainly the most visible customer.

And they've lost this customer.
"

In order to be a customer, they'd have to be paying for something. Given the recent Banshee/amazon MP3 issue, I don't think Canonical are paying GNOME anything, somehow.

Reply Score: 4

RE[2]: Duck and Cover
by allanregistos on Tue 15th Mar 2011 04:46 UTC in reply to "RE: Duck and Cover"
allanregistos Member since:
2011-02-10

"I've never got this distasteful attitude that seems to exist at the core of Gnome. It certainly doesn't happen everywhere in the project or many of its applications, but it does happen at the core of it.


Whatever it is, fear has been intensifying it. For all its flaws, Ubuntu is the most popular Linux distribution on the desktop, and therefore, one of GNOME's biggest customers - certainly the most visible customer.

And they've lost this customer.
"

A big yes. Sad news for Red Hat, but I will never recommend Fedora (which I'm using) to corporate users (those adventurous managers), or even to Internet cafes and home users. Ubuntu is the obvious choice, with its long-term support commitment for the desktop. Even the National Bookstore in the Philippines was using Ubuntu; I am quite sure they use Linux on their servers too (not necessarily Ubuntu). The point is that the GNOME desktop was the default shell of Ubuntu, which made GNOME more popular with the public.

I am the one who criticized GNOME Shell's hiding of the Power Off/Shutdown button, a design I think was born of short-sightedness and an unwillingness to collaborate even with their very own users.

Reply Score: 2

RE: Duck and Cover
by dneary on Tue 15th Mar 2011 10:37 UTC in reply to "Duck and Cover"
dneary Member since:
2011-03-15

I really don't know what else could have been done to get StatusNotifier into Gnome.


If you want to get GNOME to adopt an interface, then talk about what the interface is intended to achieve. Don't ignore the history in the discussions either. Look up Galago, for example, as some background context, and libnotify.

This little gem in Dave Neary's blog tells you all you need to know about how they really feel about collaboration:

"This is not a compelling problem statement. No user ever had a problem because notifications didn’t use D-Bus.

I don't know what you can say to that. D-Bus was initiated many years ago, by a prominent Gnome developer no less, to ensure that apps and desktops could communicate with each and work, thus helping those very same users. KDE embraced and uses D-Bus extensively.
"

Are you willfully and deliberately inferring something I didn't say, or is it accidental?

Look at what I said: no user ever had a problem because notifications didn't use DBus. Allow me to rephrase: No user cares what under-the-covers technology is used to fix the issues he has, or implement features he's interested in.

User problems are of the type: "I want to know when my computer connects to a wifi network" or "I want to know when I have an appointment coming up without opening a calendar application" or "I want to know when I have new email without opening my email client". And I don't care whether that's implemented in the back-end with DBus messages, shared memory, small applets that use inotify to watch mbox files, or whatever. It doesn't matter to me, the user, what the desktop environment and application developer do to solve my problem.
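The existing freedesktop.org Desktop Notifications spec actually illustrates this point: an application calls the `Notify` method on `org.freedesktop.Notifications` over the bus, and whichever notification daemon is running decides how the message looks. As a rough sketch (Python, deliberately building only the argument tuple rather than talking to a real bus):

```python
# Argument list of org.freedesktop.Notifications.Notify, per the Desktop
# Notifications spec. This sketch only assembles the arguments; actually
# sending them would require a D-Bus connection, which is skipped here.

def build_notify_args(app_name, summary, body, timeout_ms=5000):
    """Assemble the 8 arguments a Notify call takes, in spec order."""
    return (
        app_name,    # s: which application is asking
        0,           # u: replaces_id; 0 means "a new notification"
        "",          # s: app_icon; themed icon name, or empty
        summary,     # s: the one-line headline
        body,        # s: longer text, may be empty
        [],          # as: actions the user can invoke
        {},          # a{sv}: hints (urgency level etc.)
        timeout_ms,  # i: expiry in ms; -1 means "server default"
    )

args = build_notify_args("mail-client", "New email", "1 unread message")
```

Nothing in those arguments says "draw a bubble in the corner" - a popup daemon, a shell message tray, or even a text-to-speech agent could answer the same call, which is the "user doesn't care about the transport" argument in code form.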

Dave.

Reply Score: 1

RE[2]: Duck and Cover
by Thom_Holwerda on Tue 15th Mar 2011 10:47 UTC in reply to "RE: Duck and Cover"
Thom_Holwerda Member since:
2005-06-29

User problems are of the type: "I want to know when my computer connects to a wifi network" or "I want to know when I have an appointment coming up without opening a calendar application" or "I want to know when I have new email without opening my email client". And I don't care whether that's implemented in the back-end with DBus messages, shared memory, small applets that use inotify to watch mbox files, or whatever. It doesn't matter to me, the user what the desktop environment & application developer do to solve my problem.


Not directly, but it does matter indirectly, and that's what you're ignoring. Because of GNOME's my-way-or-the-highway approach to this particular issue, developers now have to go out of their way to support multiple APIs for something as elementary and basic as this, meaning additional work, additional code, and thus additional room for bugs. This WILL matter to users, even if they don't know about it or can't put it into words.

Worse yet - it may mean some developers will choose to ignore one implementation, which will also adversely affect users. They may think "screw this" and stick to Xembed, which will also adversely affect users. Especially now that the most popular desktop distribution is going all-out with Unity, you might see developers giving the virtual finger to GNOME, which will - again - adversely affect your users.

This is an element that I've been missing from GNOME's side of the story, and it's the element that actually matters. KDE gets this - interoperability benefits users, even if that means that KDE developers must swallow their pride and use something that could be a bit sub-par or didn't originate from within KDE.

As a user, it looks like to me that GNOME simply can't stand Ubuntu going with Unity - and that's fine. You have the right to be unhappy with this. However, fighting this out in a way that hurts users is bad - and antithetical to the values of Free/open source software. This is behaviour I come to expect from Apple and Microsoft - not from the Free software community.

Edited 2011-03-15 10:51 UTC

Reply Score: 7

RE[3]: Duck and Cover
by allanregistos on Thu 17th Mar 2011 08:43 UTC in reply to "RE[2]: Duck and Cover"
allanregistos Member since:
2011-02-10

As a user, it looks like to me that GNOME simply can't stand Ubuntu going with Unity - and that's fine. You have the right to be unhappy with this. However, fighting this out in a way that hurts users is bad - and antithetical to the values of Free/open source software. This is behaviour I come to expect from Apple and Microsoft - not from the Free software community.


I agree. During my reading of all the related blogs (except for the witch hunt), my conclusion is not favourable to the GNOME camp. It is sad because I always prefer GNOME over any desktop.

On the positive side, it has been helpful in that it woke me up and informed me, as a user, about this whole collaboration issue and its impact on me. If every DE in the FOSS world had collaborated with the others in the past, I might have an awesome and innovative desktop today, and as an application developer I might be able to develop applications faster using the innovative development tools that would have resulted.

Reply Score: 1

RE[2]: Duck and Cover
by Morty on Tue 15th Mar 2011 21:08 UTC in reply to "RE: Duck and Cover"
Morty Member since:
2005-07-06

Don't ignore the history in the discussions either. Look up Galago, for example, as some background context, and libnotify.

Heh, don't ignore history indeed. Nice of you to bring up that particular fuck-up, as it's another fine example showing off Gnome's cross-desktop "collaboration" and illustrating the project's persistent NIH issues. It underlines aseigo's argument quite nicely, and shows it's not a new problem.

Reply Score: 5

RE[3]: Duck and Cover
by Soulbender on Wed 16th Mar 2011 06:34 UTC in reply to "RE[2]: Duck and Cover"
Soulbender Member since:
2005-08-18

Yes, what DID happen to Galago? I remember it from years back but then it just... disappeared.

Reply Score: 2

Coherency?
by molnarcs on Mon 14th Mar 2011 21:44 UTC
molnarcs
Member since:
2005-09-10

Well, I mostly agree with your assessment Thom, except for the "more coherent picture" part - this is anything but coherent. What's more, not a single specific point raised by Mark or Aaron has been addressed properly. Instead, it's all who-said-what back in 2008. They can't get any more evasive than this, bogging down the whole technical aspect of collaboration in minute and irrelevant details. Yeah, GNOME is difficult to understand => nobody understands us properly (and Mark and all critics misunderstand GNOME completely) - what bullshit!

Seigo's critique still stands - rejecting the spec on the basis that no GNOME app uses it is pure tautology: no GNOME app uses it because it is not accepted by GNOME as a spec. I'm not kidding, that was one of the reasons for rejection! These obviously political reasons (read either Mark's or Seigo's blog for the other three) are the issue at hand, and of course it's kind of difficult to tackle those, so here we are in who-said-what land, and of course, oh my, people just don't get how GNOME works...

Reply Score: 4

RE: Coherency?
by somebody on Mon 14th Mar 2011 22:45 UTC in reply to "Coherency?"
somebody Member since:
2005-07-07

Seigo's critique still stands - rejecting the spec on the basis that no GNOME app uses it is pure tautology: no GNOME app uses it because it is not accepted by GNOME as a spec. I'm not kidding, that was one of the reasons for rejection! These obviously political reasons (read either Mark's or Seigo's blog for the other three) are the issue at hand, and of course it's kind of difficult to tackle those, so here we are in who-said-what land, and of course, oh my, people just don't get how GNOME works...


good post about external dependencies
http://www.markshuttleworth.com/archives/661#comment-347450
and marks response
http://www.markshuttleworth.com/archives/661#comment-347452

AFAIK, mark is shifting blame onto gnome there, while the project maintainers were the ones refusing patches, not gnome core. but then again, no one specifies which projects refused patches, so i could be wrong in putting the blame on mark. in that case the better question is which projects, and why. surely there is a bugzilla entry where they refused

Edited 2011-03-14 22:46 UTC

Reply Score: 3

RE[2]: Coherency?
by molnarcs on Tue 15th Mar 2011 05:14 UTC in reply to "RE: Coherency?"
molnarcs Member since:
2005-09-10

Hmm, that's interesting, because a specification or any low-level common framework only makes sense if it can be expected to be present on every system. Again, D-BUS comes to mind - D-BUS (and fontconfig, libxml, etc.) would not make any sense if desktops could not expect it on any system they are installed on. If StatusNotifier is to become a spec, then it must be just like that.

Mark seems to mix up external and optional dependencies. Actually, some cross-desktop frameworks are BOTH! HAL comes to mind - if present, DEs can (or could, since it's being deprecated) take advantage of it; if not (for example, FreeBSD got HAL much later than Linux), they fall back on their old mechanisms. Regardless, this is not a really crucial issue, it's more of a distraction (look, Mark is wrong, muhahaha). Not to mention the fact that the very first line on the page the poster links to reads like this (bold is mine):

External Dependencies of GNOME 2.91.x

This page lists the versions of external dependencies that GNOME modules may depend upon, as well as a recommended version of each dependency.


So are external dependencies the absolute minimum requirements that MUST be there on every system, or are they optional?
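The external-vs-optional distinction usually shows up in code as a soft dependency: probe for the shared framework at startup and fall back to a desktop-specific mechanism when it is absent, roughly how desktops treated HAL on platforms that lacked it. A hypothetical sketch (the module name and functions are invented for illustration, not a real binding):

```python
# Soft-dependency pattern: use the shared framework when available,
# otherwise fall back to the desktop's own mechanism. "hypothetical_hal"
# is a stand-in module name, not a real import.

def get_device_backend():
    """Probe for the optional cross-desktop framework."""
    try:
        import hypothetical_hal  # invented name; raises ImportError here
        return hypothetical_hal
    except ImportError:
        return None  # framework absent; caller must fall back

def legacy_probe():
    """The old per-desktop fallback path."""
    return ["(devices found via the desktop's own probing)"]

def list_devices():
    backend = get_device_backend()
    if backend is not None:
        return backend.enumerate()  # cross-desktop path
    return legacy_probe()           # graceful degradation
```

An *external* dependency, by contrast, would be an unconditional `import` with no fallback: the desktop simply refuses to run without it, which is the "must be there on every system" reading of the GNOME wiki page quoted above.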

Reply Score: 2

the only sensible thing
by somebody on Mon 14th Mar 2011 21:58 UTC
somebody
Member since:
2005-07-07

in whole drama is this part

Grow the **** up...

couldn't agree more. if they really need to have pissing and shitting contests, they could at least have them in private. now all the public can see are exposed dicks and unwiped asses.

this nitpicking only makes the whole charade look even worse. there were/are/will be mistakes and rights on all sides, meaning there will be plenty more material to throw around.

my take on problem:
- if gnome is so sure they have the right solution, let them try the public reaction, it is their neck at risk after all
- if canonical is not happy with gnome and they are so sure of themselves, they should simply fork it and try the public reaction, it is their neck at risk after all
- kde should simply wait another cycle (6 months is not long), see which one users accept, then focus cooperation on that one and ignore the other; they have nothing at risk in this game of charades

Edited 2011-03-14 21:58 UTC

Reply Score: 2

Grow the **** up...
by J.R. on Mon 14th Mar 2011 22:56 UTC
J.R.
Member since:
2007-07-25

That seems to be the only solution.

I am losing faith in the Linux community. Everywhere I turn within the different Linux communities there are enormous egos fighting each other. Personally I think Gnome looks like the bad guy in this, which is sad. However, pointing to KDE as a good example is just plain stupid. Has everyone already forgotten the "fuck you" attitude of Aaron Seigo during the first KDE4 release (no offense man, you do write good software)? It made me switch to Gnome, and now I don't know where to turn if Gnome doesn't get its shit together.

I am indeed losing faith. Forget the egos and politics, and just stick to writing good software!

Reply Score: 2

RE: Grow the **** up...
by SlackerJack on Mon 14th Mar 2011 23:25 UTC in reply to "Grow the **** up... "
SlackerJack Member since:
2005-11-12

Well don't get involved in the politics then because most of the time, facts get lost in the FUD.

Users are actually just as bad, if not worse, slagging off KDE 4.0 and GNOME 3.0 even before they were released.

Reply Score: 3

v RE[2]: Grow the **** up...
by allanregistos on Tue 15th Mar 2011 05:07 UTC in reply to "RE: Grow the **** up... "
RE[3]: Grow the **** up...
by oiaohm on Tue 15th Mar 2011 05:18 UTC in reply to "RE[2]: Grow the **** up... "
oiaohm Member since:
2009-05-30

"Well don't get involved in the politics then because most of the time, facts get lost in the FUD.

Users are actually just as bad, if not worse, slagging off KDE4.0 and GNOME 3.0 even before they have been released.

I for one tested KDE 4.0, marked STABLE, and it was still alpha-quality software at that stage. GNOME 3.0's ridiculous design decision of hiding the Shutdown button is worthy of criticism.
"

That was an unfortunate error. KDE 4.0 is what happens when marketing and developers don't understand each other. KDE 4.0 was released as a stable API/ABI reference for application developers. Somewhere between the lead developers and the marketing side of the KDE teams, the "API/ABI reference for application developers" part got lost. A stable API/ABI for application developers does not mean all the applications sitting on top were stable.

Very sorry for your pain, allanregistos. At the time I was unable to correct the media team's incorrect understanding of what the developers had said. It's one of those unfortunate errors. KDE's media release policy has since been changed so that statements about upcoming releases go back past the lead developer to check they have been interpreted correctly. So hopefully that was a one-off nightmare.

Yes, there is another issue of course: there is no word that marks a clear difference between an API/ABI reference release and an end-user version. Something that still has to be invented.

Reply Score: 6

RE[4]: Grow the **** up...
by dsmogor on Tue 15th Mar 2011 10:37 UTC in reply to "RE[3]: Grow the **** up... "
dsmogor Member since:
2005-09-01

Quite an interesting point. With OSS development communication wide open, the PR activities must actually be tightly integrated into the development process.

Reply Score: 3

RE[3]: Grow the **** up...
by phoenix on Tue 15th Mar 2011 15:40 UTC in reply to "RE[2]: Grow the **** up... "
phoenix Member since:
2005-07-11

Le sigh ;)

KDE 4.0 was never marked stable. It was marked "Developer Preview". 4.1 was the first "stable" release of KDE for non-devs to use.

Must we really go over this every single time someone mentions KDE 4.0??

Reply Score: 4

RE[4]: KDE 4.0
by allanregistos on Thu 17th Mar 2011 08:54 UTC in reply to "RE[3]: Grow the **** up... "
allanregistos Member since:
2011-02-10

Le sigh ;)

KDE 4.0 was never marked stable. It was marked "Developer Preview". 4.1 was the first "stable" release of KDE for non-devs to use.

Must we really go over this every single time someone mentions KDE 4.0??


As an end user, I really did not know that until today. If I recall correctly, I was using Fedora at the time; when I first heard of the release I installed it because I was so excited about KDE4, only to be disappointed. Anyway, this is an information gap; maybe I have to read more whenever software releases promise big or revolutionary changes.

For GNOME 3.0's release, I think I have done enough for now, as I am testing it on and off. ;)
Unlike KDE4, GNOME Shell has been consistently stable even in its early releases, apart from some slowdowns. For all this drama, regardless of who is at fault, the GNOME developers deserve respect for making GNOME Shell a reality.

Regards,
Allan

Reply Score: 1

RE: Grow the **** up...
by _txf_ on Tue 15th Mar 2011 01:43 UTC in reply to "Grow the **** up... "
_txf_ Member since:
2008-03-17

Everyone already forgot the "fuck you" attitude of Aaron Seigo during the first KDE4 release. It made me switch to Gnome, and now I don't know where to turn if Gnome don't get their shit together.

You got annoyed that kde wasn't implementing the features you wanted and you went to *gnome*, that is like cutting off your nose to spite your face.

(no offense man, you do write good software)?

ROFL

Reply Score: 2

RE[2]: Grow the **** up...
by J.R. on Tue 15th Mar 2011 07:04 UTC in reply to "RE: Grow the **** up... "
J.R. Member since:
2007-07-25

" Everyone already forgot the "fuck you" attitude of Aaron Seigo during the first KDE4 release. It made me switch to Gnome, and now I don't know where to turn if Gnome don't get their shit together.

You got annoyed that kde wasn't implementing the features you wanted and you went to *gnome*, that is like cutting off your nose to spite your face.

(no offense man, you do write good software)?

ROFL
"

Well, it was not really about the quality of the software (although some might disagree). It had everything to do with a key developer telling the users to go fuck themselves on his own blog. Basically his words were more along the lines of "I contribute, you guys (the users) don't, go fuck yourselves" - not in those _exact_ words of course, but not far from it. For me, that is a deal breaker no matter how good the software is.

I believe this is a recurring theme in the Linux community, unfortunately. Remember GAIM/Pidgin a couple of years back? Basically the same story: make unpopular changes and tell the users to go f themselves. Now I feel Gnome is heading somewhat in the same direction with Gnome-shell and the key developers' inability to listen to input from external contributors. They may argue that Canonical is to blame for a lot of things, and they might even be correct, but that does not excuse them from their own faults. They should clean their own house before pointing fingers, because surely they cannot believe that all of the criticism is 100% wrong?

The best thing for Gnome would be to admit what could be handled better, and try to improve for later. That would be to "grow the fuck up..."

Reply Score: 2

RE: Grow the **** up...
by Anonymous Penguin on Tue 15th Mar 2011 02:32 UTC in reply to "Grow the **** up... "
Anonymous Penguin Member since:
2005-07-06

I am exactly in the same boat as you, man.
It is a pity that in order to use OS X you have only two choices:
1) Buy their overpriced, often underspecced hardware.
2) Run it on a hackintosh. Apart from the technical issues, you are in breach of the EULA.

Reply Score: 2

v RE: Grow the **** up...
by allanregistos on Tue 15th Mar 2011 05:04 UTC in reply to "Grow the **** up... "
Ubuntu's direction...
by Jason Bourne on Mon 14th Mar 2011 23:00 UTC
Jason Bourne
Member since:
2007-06-02

I think the next releases of Ubuntu will be challenging. This is going to be the trial by fire Ubuntu will face. Delivering Unity will estrange many users. I feel it already. GNOME Shell will also estrange many users. The likely scenario is most of us stuck with the GNOME 2.x series, waiting to see what happens next. Perhaps then we will see a rethinking from both Canonical and the GNOME community, and of how division has harmed Linux as a desktop. Not counting the ego wars we've been reading about for some time now. Thanks to OSNews, we know this.

Reply Score: 5

webOS
by mweichert on Tue 15th Mar 2011 02:00 UTC
mweichert
Member since:
2006-03-23

With all of this conflict going on, maybe webOS will have its spot on the desktop after all. ;)

Reply Score: 1

RE: webOS
by joekiser on Tue 15th Mar 2011 02:23 UTC in reply to "webOS"
joekiser Member since:
2005-06-30

Yeah, Internet Explorer with ActiveX required.

Reply Score: 2

RE[2]: webOS
by searly on Wed 16th Mar 2011 09:30 UTC in reply to "RE: webOS"
searly Member since:
2006-02-27

Huh??? WebOS has nothing to do with Internet Explorer, let alone Windows ...

Reply Score: 1

My take
by nt_jerkface on Tue 15th Mar 2011 02:02 UTC
nt_jerkface
Member since:
2009-08-26

Linux has long been held back by the lack of a single desktop.

I don't think that is the biggest problem but it is in the top 3.

Collaboration among desktops still leaves a lack of a cohesive alternative. In this day and age you can't ask users to leave Windows or OSX along with their software and then explain how they need to pick from 1 of 5 desktops.

Desktop groups don't have a good history of working together so I don't see why anyone would assume that will change.

The Linux desktop is back to hobby status so I think it is better to just let them duke it out. Gnome 3 is a mess, KDE should just ignore it.

Reply Score: 2

RE: My take
by Anonymous Penguin on Tue 15th Mar 2011 02:38 UTC in reply to "My take"
Anonymous Penguin Member since:
2005-07-06



The Linux desktop is back to hobby status so I think it is better to just let them duke it out.


You rightly say "it is back", because I always felt that KDE 3 was very professional and improving all the time.

Reply Score: 3

RE: My take
by oiaohm on Tue 15th Mar 2011 03:36 UTC in reply to "My take"
oiaohm Member since:
2009-05-30

"Linux has long been held back by the lack of a single desktop.

I don't think that is the biggest problem but it is in the top 3.

Collaboration among desktops still leaves a lack of a cohesive alternative. In this day and age you can't ask users to leave Windows or OSX along with their software and then explain how they need to pick from 1 of 5 desktops.

Desktop groups don't have a good history of working together so I don't see why anyone would assume that will change."

History: you lack it. Most progress on common standards has come precisely at the times when the desktops were at each other's throats, threatening to kill each other.

Basically, things have been too passive for progress in recent years. This is a sign of a possibly good time to come.

Reply Score: 1

RE[2]: My take
by nt_jerkface on Tue 15th Mar 2011 16:36 UTC in reply to "RE: My take"
nt_jerkface Member since:
2009-08-26

Whatever you say Yoda.

KDE and GNOME have always been the bestest pals and always collaborate. No one ever complains about KDE and GNOME conflicts. Everything is fine in Linuxland, not a single tank.

Reply Score: 4

RE: My take
by Soulbender on Wed 16th Mar 2011 06:48 UTC in reply to "My take"
Soulbender Member since:
2005-08-18

Linux has long been held back by the lack of a single desktop.

I don't think that is the biggest problem but it is in the top 3.


I dunno about that. Five years ago maybe but not now. I can't really say I notice a big difference between gnome and kde apps when it comes to user interface and basic functionality. I'd say this is true on all mainstream distros.

Collaboration among desktops still leaves a lack of a cohesive alternative. In this day and age you can't ask users to leave Windows or OSX along with their software and then explain how they need to pick from 1 of 5 desktops.


Let's not underestimate consumers' intelligence. Consumers are perfectly capable of choosing between a dizzying number of other products, like phones, so picking a desktop isn't that much of a stretch.

Gnome 3 is a mess, KDE should just ignore it.


On that we can agree. If Gnome doesn't want to play ball, f--k 'em.

Edited 2011-03-16 06:49 UTC

Reply Score: 3

RE[2]: My take
by nt_jerkface on Wed 16th Mar 2011 19:02 UTC in reply to "RE: My take"
nt_jerkface Member since:
2009-08-26

Let's not underestimate consumers' intelligence. Consumers are perfectly capable of choosing between a dizzying number of other products, like phones, so picking a desktop isn't that much of a stretch


I think the problem is intellectual laziness and not a lack of intelligence.

The average consumer is intimidated by computers and is averse to learning new software. I would also suspect that the majority would prefer to have only one choice when it comes to mobile operating systems. They like a range of colors and sizes but when it comes to software they are resistant to anything new.

Reply Score: 2

Xubuntu
by ozonehole on Tue 15th Mar 2011 02:26 UTC
ozonehole
Member since:
2006-01-07

I wanted to like Unity, but so far I haven't been too pleased with it. The user interface isn't great, nor is it very fast.

Unfortunately I also think that Gnome has been going in the wrong direction for some time. It looks like Gnome3 will be even worse, but I'll reserve judgement until it's released.

I'm sure glad that Xubuntu is out there. Simple and fast, the way Linux should be.

Reply Score: 2

Of course...
by satsujinka on Tue 15th Mar 2011 03:57 UTC
satsujinka
Member since:
2010-03-11

everyone just forgets about projects like XFCE, Enlightenment, and various window managers.

I get that they aren't that popular at the moment, but there's no reason why people can't adapt to them, especially if the politics of the larger projects are bothersome.

Though if we aren't in the mood for adapting to smaller projects, perhaps when things cool down people might take a good hard look at what's been going on with the feature bonanza that modern DEs have become.

Personally, I have no clue why anyone would care about notifications. I don't use a system tray, and I don't use a window manager that allows programs to alert me to their needs. It isn't their place to demand my attention; I will tend to them when I get to them. I don't expect other people to work like this; it's just that it might be helpful if people took a good hard look at the things they take for granted (like a system tray) and asked whether they really need or want that functionality. (I for one don't; it's more bothersome to me than helpful.)

Reply Score: 1

oiaohm
Member since:
2009-05-30

http://www.icedrobot.org/

Yes there is a project to bring android applications to the normal Linux desktop.

The Android API might become the home for closed-source applications on Linux, at least for a while, and act like Java for other POSIX platforms. There is one problem: MS Windows users, you are cut out of this loop.

Of course, Android has a native binary API as well, so a unified application installer for Linux platforms might now exist. KDE Plasma widgets likewise avoid distribution packaging, so they are the same across distributions.

Pressure is building for KDE and Gnome to bypass distributions.

There are interesting battles to come. Distributions not having unified installers is going to cease at some point. The question is whether packaging will be unified willingly by the distributions or through the side door. More infighting between KDE and Gnome for progress would be good.

The game is in play. Strap yourself in expect lot more hostilities as all this plays out.

Reply Score: 2

v Comment by stipex
by stipex on Tue 15th Mar 2011 07:36 UTC
RE: Comment by stipex
by ari-free on Tue 15th Mar 2011 08:00 UTC in reply to "Comment by stipex"
ari-free Member since:
2007-01-22

I heard Windows 7 isn't so bad ;)

Reply Score: 3

soap operas
by ari-free on Tue 15th Mar 2011 07:53 UTC
ari-free
Member since:
2007-01-22

If I wanted to watch a soap opera, I could turn to Young and the Restless or General Hospital. I didn't know open source developers liked that sort of thing.

Reply Score: 2

Ubuntu ?
by rafaelnp on Tue 15th Mar 2011 10:16 UTC
rafaelnp
Member since:
2009-06-03

Mark Shuttleworth is a "show man", a blablabla man. And he seems to be, at least in my opinion, a kind of dictator. Ubuntu is no democracy (it's widely known), so I think I've made my point.

Reply Score: 2

RE: Ubuntu ?
by Soulbender on Tue 15th Mar 2011 12:13 UTC in reply to "Ubuntu ?"
Soulbender Member since:
2005-08-18

Ubuntu is no democracy


Wow, you mean it's exactly like the kernel?

Reply Score: 3

RE[2]: Ubuntu ?
by nt_jerkface on Wed 16th Mar 2011 19:59 UTC in reply to "RE: Ubuntu ?"
nt_jerkface Member since:
2009-08-26

Though I disagree with Linus on some of his kernel design choices, he is still a reasonable guy and openly admits to being a dictator. He takes an honest position, which I can respect.

Shuttleworth on the other hand is a Mac fan who would like to turn Linux into a Mac-junior all while talking about the value of his precious community that he usually ignores.

He revealed his true stripes to everyone with that Banshee incident. I don't see why so many still want to give him a break. He hasn't pushed Linux into the mainstream. Ubuntu has just become the de facto Gnome distro for geeks and their grandmas.

Reply Score: 2

RE: Ubuntu ?
by allanregistos on Thu 17th Mar 2011 09:02 UTC in reply to "Ubuntu ?"
allanregistos Member since:
2011-02-10

Mark Shuttleworth is a "show man", a blablabla man. And he seems to be, at least in my opnion a kind of dictator. Ubuntu is no democracy (it's widely known), so i think i've made my point.


Not defending Canonical or Ubuntu, but you may have noticed that Mark is the one funding the development of Ubuntu; this may change in the future depending on the success of the project. You have to expect a different governing process for Ubuntu because of this, rather than expecting Ubuntu to behave like Debian. In that case, you may use Debian and avoid Ubuntu.

The founder may have weight in any decisions being made, but that should not hinder or compromise the ideals of free software. As of this moment, I think Mark hasn't violated any of this as far as FOSS is concerned. Developing in private and releasing in public for acceptance as open source software is not, I believe, in violation of any of the free software licenses involved.

Reply Score: 1

Clarifying rules for external dependencies
by dneary on Tue 15th Mar 2011 10:48 UTC
dneary
Member since:
2011-03-15

From http://www.markshuttleworth.com/archives/661#comment-347450 :

In the interests of full disclosure, I sent a couple of emails to Mark explaining what I understood was GNOME’s policy towards dependencies. To ensure accuracy, I then spoke to 2 members of the release team, who both confirmed the policy for me. One of the members said some pretty strong things in his comments, which I tempered (without modifying the sense) in my email to Mark.

Here’s an extract from the first email I sent to Mark, where I describe my understanding of external dependencies:

> Mark Shuttleworth wrote:
> > It’s difficult to know what external dependency processes are for: some
> > say they are to bless existing decisions, others say they are a
> > requirement for those decisions to be taken.
>
> I’m writing a follow-up blog entry and I hope that I can clarify that
> for you. There seems to be no such confusion for the release team (at
> least, in the 2.x era): an external dependency is a non-GNOME module
> which is a dependency of a package contained in one of the GNOME module
> sets. And since libappindicator does not fit that definition, there is
> quite simply no need for it to be an external dependency. I can point
> you to 3 or 4 precedents, if you’d like.

Here’s the full email I sent to Mark after speaking to release team members, from which he cites above:

> I got you what is, as far as I can tell, a definitive answer on this.
> First, extracts from the release team policies:
>
> ** From http://live.gnome.org/ReleasePlanning/ModuleRequirements
>
> “Do not add any external dependencies other than those already approved
> for that cycle (e.g.
> http://live.gnome.org/TwoPointSeventeen/ExternalDependencies). This
> includes not depending on a newer version than what is already approved.”
>
> ** From http://live.gnome.org/ReleasePlanning/ModuleProposing
>
> # I need a new dependency. What should I do?
> * New dependencies for features should be added as soon as possible.
> There are three possibilities for dependencies: make them optional,
> bless them as external or include them in one of our suites. New
> dependencies should be known before feature freezes. A dependency can be
> proposed for inclusion AFTER the 2.27.1 release because it might need
> more time to be ready.
>
> # How to propose an external dependency?
> * If you want to add a new dependency, make a good case for it on
> desktop-devel-list (this may only require a few sentences). In
> particular, explain any impact (compile and run time) on other modules,
> and list any additional external dependencies it would pull in as well
> as any requirements on newer versions of existing external dependencies.
> Be prepared for others to take a few days to test it (in particular, to
> ensure it builds) before giving a thumbs up or down.
>
>
>
> Now, in practice:
> 1. If a maintainer wants to add optional (compile-time) support for a
> new feature that uses a library, there is nothing they have to do beyond
> commit the patch, and let the release team know.
> 2. If a maintainer wants to add unconditional support for a feature
> which requires a new dependency, then they should first write the patch,
> then propose the dependency for inclusion in the next release.
>
> Traditionally, the bar for external dependencies has been low, modulus a
> number of conditions. There is reason to believe that the bar for
> libappindicator would be higher, because of the history involved. One or
> more maintainers arguing for the functionality would help.
>
> I have talked to 2 release team members specifically about
> libappindicator, and have been told by one that:
> * Since libappindicator has a CLA, it can’t be included in the GNOME
> module sets under current policy
> * It could be included as an external dependency, but would meet some
> opposition because of duplicate functionality with libnotify
>
> and by the other that:
> * libappindicator doesn’t make sense as a GNOME dependency because it is
> only useful with Unity, which is not part of GNOME
> * adding appindicator support will only make apps better on one distro,
> and don’t benefit GNOME as a whole
> * If people want to make their app integrate with Unity they’re free to
> do so, but they should add a configure option so the release team
> doesn’t have to worry about it
> * For core GNOME components, providing deep integration with other
> desktops is probably a non-starter
>
> This is of course all personal opinion on the part of the 2 people I
> spoke to.
>
> In short, it’s an unnecessarily emotional issue which has been
> aggravated by all concerned. But if module maintainers want to support
> libappindicator, then they are able to do so. And if you can persuade
> the shell authors to use appindicators in the same way as Unity, then
> there would be nothing apart from copyright assignment preventing
> libappindicator being part of the GNOME platform.

Hopefully it’s clear that Mark’s reading of my email is selective, to say the least. There is no disagreement between the two release team members I talked to, the policies for dependencies are clear & unambiguous, and, as others have said, there is no need to do anything when proposing optional compile-time support for a new dependency.

The relevant release team guidelines I quoted are also consistent with the position the release team took for libappindicator.

In fact, the release team adopted almost exactly the same position for libnotify when it was first proposed for inclusion, in 2.20: http://www.0d.be/2011/03/13/libnotify%20adoption/

Cheers,
Dave.

Reply Score: 1

Thom_Holwerda Member since:
2005-06-29

Thanks Dave, I added a link to your comment in the article.

Reply Score: 1

sorpigal Member since:
2005-11-02

Can I get a clarification?

Suppose that the appindicator devs evangelized it to GNOME developers and got a bunch of them to add support to their apps for the next release. After that release it could be said that things in GNOME use it, so would a proposal to include it as an external dependency then be appropriate?

It seems like the process is to get it used first, then to propose it as part of the platform. Am I right about this?

Reply Score: 3

phoenix Member since:
2005-07-11

* For core GNOME components, providing deep integration with other


Wow. Just. Wow. That line right there sums up the entire issue with GNOME.

Other DEs will bend over backwards to make things work nicely with GNOME. And what does GNOME do to make things work nicely with other DEs?

Absolutely nothing.

And until that attitude changes, there's really nothing to discuss.

Just, wow. [shakes head]

Reply Score: 6

phoenix Member since:
2005-07-11

Hrm, seems I chopped off the quote:

* For core GNOME components, providing deep integration with other desktops is probably a non-starter

Reply Score: 2

Linux continues to lose its relevance
by morglum666 on Tue 15th Mar 2011 12:15 UTC
morglum666
Member since:
2005-07-06

For all the chatter about the "bazaar", the "cathedral", "many eyes make bugs cry", etc., Linux continues to lose its relevance.

In an era when you have the top tablet maker talking "post-PC", Linux didn't even reach that level of recognition in the first place.

The shift to web apps only makes the problem worse, as the Linux desktop solves a problem that Microsoft solved a long time ago, and it does it in a more painful fashion.

Developers: Instead of working on this non winner, go build a web app that people will actually use and value.

Morglum

Reply Score: 3

_txf_ Member since:
2008-03-17

In an era when you have the top tablet maker talking "Post PC" - linux didn't even reach that level of recognition in the first place.


Ever heard of Android?

The shift to web apps only makes the problem worse as the linux desktop solves a problem that Microsoft solved a long time ago, and it does it in a more painful fashion.


What do you mean, a long time ago? *cough* IE6, 7, and 8 are all a piece of sh**. Only with the release of IE9 will they catch up...

Also if you're so into web apps Chrome is built on top of (you guessed it) *Linux*...

Also I would like to point out that not everybody enjoys web development.

Edited 2011-03-15 12:29 UTC

Reply Score: 4

oiaohm Member since:
2009-05-30

"For all the chatter about the "bazaar", the "cathedral", "many eyes make bugs cry", etc., Linux continues to lose its relevance."

How so? Android, webOS, and Splashtop are just different forms of Linux, so Linux is gaining relevance. OK, some distributions might lose their relevance and disappear. Has this happened before? Yes. Should we expect it to happen in the future? Yes.

"In an era when you have the top tablet maker talking "post-PC", Linux didn't even reach that level of recognition in the first place."

It's the old business game: change the marketplace. Linux is working perfectly, changing the market to suit itself.

"The shift to web apps only makes the problem worse, as the Linux desktop solves a problem that Microsoft solved a long time ago, and it does it in a more painful fashion."

Really? MS did not solve it. Web apps are just another option.

"Developers: Instead of working on this non winner, go build a web app that people will actually use and value."

Servers to support web apps are still required; please don't forget this. Web apps still play to Linux's strong points. This is all about changing the market to suit where Linux is strong and where Windows is weak.

Reply Score: 1

bfr99 Member since:
2007-03-15

In a market dominated by user interfaces from Apple, Android, and HTML5/JavaScript, spats between Gnome and Canonical are about as relevant as the Betamax/VHS debates.

Reply Score: 1

nt_jerkface Member since:
2009-08-26

Android, webOS, and Splashtop are just different forms of Linux, so Linux is gaining relevance. OK, some distributions might lose their relevance and disappear. Has this happened before? Yes. Should we expect it to happen in the future? Yes.


He's clearly talking about traditional distributions and desktops.

Yes Linux powers Android, but does that really matter? What type of relevance is Linux gaining? Linux also powers my blu-ray player, does that confer benefits to some greater movement?

The Linux old-guard doesn't want to admit that their stalwarts (RMS, ESR) were wrong in some of their key assumptions. It's like a religion that refuses to accept reform and would rather die than admit that some of the founders were not actually prophets.

The open source holy war has been a giant mistake. MS and Apple would have more competition if the Linux movement had not been so hostile in its attitude towards proprietary software. That rigid open source ideology has merely served the interests of the largest proprietary companies. Linux needed partners of all types early on, but the movement was arrogant and assumed hobby coders could do all the work.

As I said before the Linux desktop is back to being a hobby so this infighting is merely geek entertainment. Whether it has 5 or 500 desktops doesn't matter at this point.

Reply Score: 1

akula83 Member since:
2009-11-17


The Linux old-guard doesn't want to admit that their stalwarts (RMS, ESR) were wrong in some of their key assumptions. It's like a religion that refuses to accept reform and would rather die than admit that some of the founders were not actually prophets.

The open source holy war has been a giant mistake. MS and Apple would have more competition if the Linux movement had not been so hostile in its attitude towards proprietary software. That rigid open source ideology has merely served the interest of the largest proprietary companies. Linux needed parters of all types early on but the movement was arrogant and assumed hobby coders could do all the work.

As I said before the Linux desktop is back to being a hobby so this infighting is merely geek entertainment. Whether it has 5 or 500 desktops doesn't matter at this point.


This is a completely valid opinion... the only issue is that it was RMS, ESR, and the other old-guard stalwarts who built the foundations we are building on today. They provided us with the infrastructure, either directly or by marshalling people to the cause.

The cause of "Free Software / Open Source Software" is what has created large portions of the modern infrastructure. This infrastructure is intertwined with certain political (some would say religious) leanings, believing that software should be "free".

The thing a lot of people don't get is that large swathes of the Linux community care more about "openness" than adoption. They would prefer the software be used only by three guys in their basements than to lose their freedom.

Other projects, such as the BSDs, have more pragmatic approaches. Distributions could base themselves on FreeBSD and offer a completely 100% stable ABI/API and what have you. Licensing would not be an issue. However, no one has done this. There seems to be more value in being a Linux distribution than a BSD distribution, and I believe that is because of the value of the existing investments by the old guard.

Each part of the ecosystem is owned by its contributors, many of whom don't want the ecosystem to change, and it is their right to do what they like with their code.

If you think they aren't doing something advisable, you can tell them. However, this community has already decided that it values either the ideal of free software or the contributions made by the people who follow that ideal.

At the end of the day there are always more sandboxes to play in (OSX, Windows, *BSD, Haiku, etc).

Jason

Reply Score: 2

oiaohm Member since:
2009-05-30

"Android, webOS, and Splashtop are just different forms of Linux, so Linux is gaining relevance. OK, some distributions might lose their relevance and disappear. Has this happened before? Yes. Should we expect it to happen in the future? Yes.


He's clearly talking about traditional distributions and desktops.
"

And the problem is: why does Linux have to compete against MS using a traditional desktop model? Nothing says it has to. Android, surprise surprise, with the work on tablets, is adding an interface shaped a lot like a traditional desktop.

The simple fact, nt_jerkface, is that Linux is free to change the ballpark. And the progress Android makes, with the project to allow Android applications on Linux and OS X, will bring Android's progress back to the desktop as well.

The idea that they are different things is the problem. Every major distribution today, bar Ubuntu, started off as a corner case performing a particular task and then moved to the desktop. The history of distribution cycles doesn't change. Ubuntu overtook its predecessors; now Android, the new kid on the block, is lining up to take out Ubuntu on the desktop. After Android, something else will line up to take it out. Welcome to the field of sharks that the distribution world is.

It's always "Linux has lost relevance" when in fact we could be heading into another distribution bloodbath, where distributions that have not been progressing die. Just because relevance is moving between distributions does not mean Linux is losing relevance. It means something interesting is about to happen.

Reply Score: 3

The hardest thing to kill is a good myth.
by oiaohm on Thu 17th Mar 2011 08:01 UTC
oiaohm
Member since:
2009-05-30

Canonical really has no right to throw stones at upstream.

It is really simple for distributions to get away with murder and blame the upstream.

nt_jerkface really was not to know that, in the topic area he took me on in, I have taken on one of the lead developers of ReactOS and some of the most important reverse engineers of NT tech before. It's a topic area I have never been defeated in and one I know extremely well.

The Linux kernel project is technically causing no issues; the issues are being introduced downstream of it. If you take stock kernel source from kernel.org, build it with no patching, and try to run Ubuntu on it with the default configuration, it doesn't run very well.

Fedora, Red Hat, SUSE, CRUX, Debian, Arch: all that I have tested personally can operate on a kernel built from source taken straight from kernel.org without alteration.

The most commonly tested distributions are Ubuntu-related, using Ubuntu-patched kernels. This exercise of placing a distribution on a pure stock kernel and seeing what happens is a good test of the odds that closed-source drivers will work perfectly: the worse the distribution performs, the less likely a closed-source driver is to work. This is purely tampering-related.

Next is what is called dependency hell. Distribution packaging systems have a known flaw that prevents you from taking binaries from wherever you like: in most cases only one version of a .so (the equivalent of a Windows .dll) can be installed at the same time.

It is really the lack of multi-version install support in the package managers, and in the dynamic loader distributions use, that causes massive amounts of incompatibility.

Windows uses two mechanisms, SxS (side-by-side assemblies) and compatibility shims, both in user space, not kernel space, to give Windows its magical backwards compatibility for applications. Is there any technical limit imposed by the Linux kernel preventing this? Answer: no.

These massive incompatibility issues also exist on FreeBSD, NetBSD, Solaris, and most other Unix-based systems, so this is not a defect unique to Linux.

One of Linux's biggest historic mistakes was copying how Unix systems did their dynamic libraries. Debian running on the FreeBSD kernel shows just as much trouble as Debian on the Linux kernel. This was one of the first things that clued me in that the kernel argument could be complete crap.

Here is another case of providing only one: why does Ubuntu have to ship only one version of the X.Org server, when it could have shipped one version for open-source drivers and one for closed-source drivers, avoiding failures? Applications speaking the X11 protocol really would not have cared. The kernel would not have cared. But no, disk space for the flashiest features is more important than not hurting the user.

Ubuntu claims to be user-friendly, but a lot of what it does is the worst kind of user-friendly: a baited trap waiting to bite you.

Ubuntu's claim of user-friendliness needs to be read as "we will be flashy, and we don't give a stuff if your computer crashes or does other bad things to you".

Of course, Ubuntu is not the only one that deserves to be picked on for shipping packaging systems and dynamic loaders that are second-rate for modern requirements. Fixing these things is not pretty.

While people keep getting the problem wrong, pressure is not applied where the problem actually is, so it never gets fixed.

Reply Score: 1