Linked by Jordan Spencer Cunningham on Mon 11th Jan 2010 15:57 UTC
Original OSNews Interviews
A few weeks ago, we asked the OSNews community to help with some questions we were going to ask Aaron Griffin from the Arch Linux team, and the response was glorious and somewhat phenomenal. We added those questions to our own and sent them on over, and we were surprised to receive not only Aaron Griffin's responses but also answers from various individuals on the team.
Thread beginning with comment 403745
RE[4]: Ugh
by nt_jerkface on Tue 12th Jan 2010 18:57 UTC in reply to "RE[3]: Ugh"


But then you lose the whole point of different distros.


I said at least provide a standard base that distros follow. Anyway, most of the distros are completely pointless. It makes more sense to have an OS that is modular in design and can be modified for a variety of purposes while maintaining binary compatibility.


You can distribute Linux binaries too - so in that respect, Linux isn't much different to Windows.

You're being disingenuous. Linux is much different to Windows AND OSX in that regard, since you can't build a single GUI executable and expect it to work across all distros for a reasonable amount of time. The Linux ecosystem is designed with the assumption that user software is open source. If the goal is adoption by the public then it doesn't make sense to design the system completely around open source.
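To make that concrete, here is a minimal C sketch (an illustration only, not code from any real app) of the defensive loading cross-distro binaries end up doing: instead of hard-linking against a desktop library whose soname differs between releases, probe for it at runtime and degrade gracefully. libnotify and its notify_init() call are just convenient stand-ins.

    /* probe for a library at runtime instead of hard-linking it */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Different distro releases ship different library versions,
         * so try a couple of sonames rather than depending on one. */
        const char *candidates[] = { "libnotify.so.4", "libnotify.so.1", NULL };
        void *lib = NULL;

        for (int i = 0; candidates[i] != NULL && lib == NULL; i++)
            lib = dlopen(candidates[i], RTLD_NOW | RTLD_LOCAL);

        if (lib == NULL) {
            fprintf(stderr, "notifications unavailable: %s\n", dlerror());
            return 0;              /* keep running without the feature */
        }

        /* Resolve the entry point by hand instead of relying on link-time binding. */
        int (*init)(const char *) = (int (*)(const char *)) dlsym(lib, "notify_init");
        if (init != NULL)
            init("portable-demo");

        dlclose(lib);
        return 0;
    }
    /* build: gcc portable_demo.c -o portable_demo -ldl */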

The binary compatibility across Linux distros that exists is for small command line programs, and even then it is limited since the distros can't even agree on basics like where user programs and settings should be stored.
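The "where do settings live" part at least has a published convention, the freedesktop.org XDG Base Directory spec. A small sketch of how an application can resolve its per-user config directory without hard-coding a distro path ("myapp" is a placeholder name):

    /* honour $XDG_CONFIG_HOME, fall back to ~/.config */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char path[4096];
        const char *xdg  = getenv("XDG_CONFIG_HOME");
        const char *home = getenv("HOME");

        if (xdg && *xdg)
            snprintf(path, sizeof path, "%s/myapp", xdg);
        else if (home && *home)
            snprintf(path, sizeof path, "%s/.config/myapp", home);
        else
            snprintf(path, sizeof path, "./myapp");   /* last-ditch fallback */

        printf("settings would live in: %s\n", path);
        return 0;
    }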

That makes little sense. A repository /IS/ a shared library system.

In the Linux sense of the word. By general definition a repository is a storage system. You can store safe executables for the user to download. There is no reason why this must be a feature exclusive to shared library systems.


Now you're talking about a completely different topic.
(plus repositories / package managers SOLVE the dependency issues which often break systems, rather than causing them as you suggest).

No I'm not, it's all a part of the same problematic software distribution system. Package managers attempt to resolve dependencies but applications still get broken by updates.

Here's the genius shared library system at work:
Skype broken after KDE update:
http://fedoraforum.org/forum/showthread.php?t=233354


The command line dependency has nothing to do with software repositories whatsoever!! (and more importantly, 99% of the time you don't need to touch the command line -

I said that going to the command line is typically needed to fix dependency breaks.

Here's an example:
http://itechlog.com/linux/2008/12/18/fix-broken-package-ubuntu/


Most Linux distros give you the CHOICE of using a command line or a GUI. You DON'T have to use the command line, but sometimes it's just easier to explain on a forum than trying to navigate someone around various windows and menus.


Explain how the last example problem could have been fixed with the GUI.


So what you're suggesting is to replace one software repository with another!?

One that makes more sense.


Plus you're still missing the point that sometimes packages need to be tailored specifically to that distro.

I'm missing the point even though I already went over this? How long did you spend reading my response? 10 seconds?

The tailoring wouldn't be needed if the distros had a common library base and directory structure.

There are other options, including a standard common language interface, a binary compatibility layer, or even a VM solution. But shutting your brain off and defending the status quo is probably the worst option.


Software repositories have nothing to do with disk space savings!

The shared library system was designed in a completely different era when saving hard drive space was a priority. That is no longer an issue and now the remaining benefits can be adopted within an independent system where applications can have their own libraries that can't be broken by a system update.
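A rough sketch of that "application carries its own libraries" idea, assuming Linux's /proc/self/exe and dlopen(): load a private copy of a library from a lib/ directory next to the executable, so a distro update can't swap it out underneath the app. The usual link-time equivalent is building with -Wl,-rpath,'$ORIGIN/lib'; "libmything.so" is a placeholder name.

    /* load a bundled library from <exe dir>/lib instead of the system copy */
    #include <dlfcn.h>
    #include <libgen.h>     /* dirname */
    #include <stdio.h>
    #include <unistd.h>     /* readlink */

    int main(void)
    {
        char exe[4096];
        ssize_t n = readlink("/proc/self/exe", exe, sizeof exe - 1);  /* Linux-specific */
        if (n < 0) { perror("readlink"); return 1; }
        exe[n] = '\0';

        char libpath[4096];
        snprintf(libpath, sizeof libpath, "%s/lib/libmything.so", dirname(exe));

        void *lib = dlopen(libpath, RTLD_NOW);
        if (lib == NULL) {
            fprintf(stderr, "bundled library missing: %s\n", dlerror());
            return 1;
        }
        printf("loaded private copy: %s\n", libpath);
        dlclose(lib);
        return 0;
    }
    /* build: gcc bundled.c -o bundled -ldl */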

Try reading my response more carefully next time instead of just skimming it and providing a knee-jerk response. It isn't a Windows vs Linux issue. It's a software engineering issue. Apple's engineers decided to ditch the shared library system, so maybe you should at least question why.

Edited 2010-01-12 19:00 UTC


RE[5]: Ugh
by Laurence on Tue 12th Jan 2010 20:40 in reply to "RE[4]: Ugh"

I said at least provide a standard base that distros follow. Anyway, most of the distros are completely pointless. It makes more sense to have an OS that is modular in design and can be modified for a variety of purposes while maintaining binary compatibility.

Err, Linux IS modular in design and can be modified for a variety of purposes while maintaining binary compatibility.

You're being disingenuous. Linux is much different to Windows AND OSX in that regard, since you can't build a single GUI executable and expect it to work across all distros for a reasonable amount of time.

You can. I've already stated that. Stop trying to spread BS.
The problem with Linux (if you can call it that) is that it's a rolling release - so whereas in Windows you have a major release every 3 to 5 years (on average), in Linux you have lots of minor releases.
Sometimes these minor releases will break things. But then I've had service packs break Windows too - let alone whole OS upgrades breaking apps.

So yes, Linux binaries won't work indefinitely - but then neither will Windows binaries.


The Linux ecosystem is designed with the assumption that user software is open source. If the goal is adoption by the public then it doesn't make sense to design the system completely around open source.

Again, that's absolute BS. It makes no difference whether the source is open or not.
Plus ArchLinux and all the big user-centric distros push binaries out via their repositories, so users never need to know the source code was optionally downloadable.


The binary compatibility across Linux distros that exists is for small command line programs, and even then it is limited since the distros can't even agree on basics like where user programs and settings should be stored.

Again, that's complete rubbish.
You do realise that there are plenty of large closed source apps available for Linux? VirtualBox (not the OSE but the more feature-rich edition) is closed AND has a GUI. And given the complexity of virtualisation, I'd hardly define that as a small command line program.

In the Linux sense of the word. By general definition a repository is a storage system. You can store safe executables for the user to download. There is no reason why this must be a feature exclusive to shared library systems.

Right, I get you.


No I'm not, it's all a part of the same problematic software distribution system. Package managers attempt to resolve dependencies but applications still get broken by updates.

You still don't get it. The package managers /DO/ resolve the issue. Sure, there are occasions when things still go tits up. But then that's the case with EVERY OS.
Operating systems are infinitely complex - so sh*t happens.

However, try manually resolving dependencies in Linux (rather than using the "problematic software distribution") and I bet you'd instantly run into trouble.

So trust me when I say that package managers have made life a HELL OF A LOT easier on Linux.

I said that going to the command line is typically needed to fix dependency breaks.

Here's an example:
http://itechlog.com/linux/2008/12/18/fix-broken-package-ubuntu/

That link has nothing to do with your argument (it details how to fix a package that was corrupted on install, and has nothing to do with dependencies).


The tailoring wouldn't be needed if the distros had a common library base and directory structure.

But for the most part they DO (and those that don't, don't for very specific reasons - usually the same reasons they forked in the first place).

Personally I like the fact that there's lots of different distros. Sure it complicates things, but at least I get to run the system I want without compromise.


The shared library system was designed in a completely different era when saving hard drive space was a priority. That is no longer an issue and now the remaining benefits can be adopted within an independent system where applications can have their own libraries that can't be broken by a system update.


While I get what you're driving at - this is never an issue for home users, as package managers are bloody good these days. So I still think you're massively overstating the problem.
Sure, the devs at ArchLinux (and other distro devs) might get fed up from time to time.
However, they're the ones in the position to make the change (as bad as it sounds - it's not my problem, it's theirs. So I'll invest my spare time developing solutions to problems I encounter).

Try reading my response more carefully next time instead of just skimming it and providing a knee-jerk response. It isn't a Windows vs Linux issue. It's a software engineering issue.

Your initial post used Windows as a comparison and it's just continued from there. ;)


RE[6]: Ugh
by nt_jerkface on Wed 13th Jan 2010 01:04 in reply to "RE[5]: Ugh"


Err, Linux IS modular in design and can be modified for a variety of purposes while maintaining binary compatibility.

That's why I said OS, as in a full operating system, not a kernel. The problem is that there isn't binary compatibility across distros that use the Linux kernel.
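To make the split concrete: the kernel's system-call interface is the part that does stay stable across distros, while the userland stacked on top of it (libc versions, toolkits, file locations) is what varies. A tiny C illustration, assuming nothing distro-specific:

    #include <stdio.h>
    #include <sys/syscall.h>   /* SYS_getpid */
    #include <unistd.h>        /* syscall() */

    int main(void)
    {
        /* The raw kernel interface: the same on every Linux distro. */
        long pid = syscall(SYS_getpid);
        printf("pid via raw syscall: %ld\n", pid);

        /* Everything layered above the kernel - which libc, which GUI
         * toolkit, where files go - is what actually differs between
         * distros. */
        return 0;
    }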

So yes, Linux binaries won't work indefinitely - but then neither will Windows binaries.

No one expects Windows binaries to work indefinitely. However, you can expect them to work for the life of the operating system. Both Windows and OSX see the value in offering developers a stable platform. With Linux you can't even expect them to work between minor updates.
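One common way that breakage shows up (an illustration, not anyone's specific bug) is glibc symbol versioning: a binary built on a newer distro pulls in versioned symbols such as GLIBC_2.xx that an older installation doesn't export, and the loader then refuses to start it. A one-liner to see what a system actually provides:

    /* print the glibc version the binary is running against */
    #include <gnu/libc-version.h>
    #include <stdio.h>

    int main(void)
    {
        printf("runtime glibc: %s\n", gnu_get_libc_version());
        return 0;
    }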


It makes no difference whether the source is open or not.

I was talking about user software. The software distribution systems are all designed around open source. You run into massive headaches when you work outside that system - not just in distribution, but because the distro clusterfu*ck is dealt with by releasing the source and having the downstream packagers account for the differences.


You do realise that there's plenty of large closed source apps available for Linux? VirtualBox (not the OSE but the more feature-rich edition) is closed AND has a GUI. And given the complexity of virtualisation, I'd hardly define that as a small command line program.

There are closed source apps available for Linux, but the companies that produce them still have to account for all the differences. Companies that release a single tar file are hiding all the "poke in the dark" scripts that have to be built to deal with all the distros. Even if you only release for a couple of distros you still end up building multiple binaries.
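As a sketch of the kind of guesswork those scripts do - the marker files below are the classic ones, and real installers check far more than this:

    /* crude distro sniffing via well-known release files */
    #include <stdio.h>

    int main(void)
    {
        const struct { const char *file; const char *family; } probes[] = {
            { "/etc/debian_version", "Debian/Ubuntu (dpkg/apt)"  },
            { "/etc/redhat-release", "Red Hat/Fedora (rpm/yum)"  },
            { "/etc/SuSE-release",   "SUSE (rpm/zypper)"         },
            { "/etc/arch-release",   "Arch (pacman)"             },
            { NULL, NULL }
        };

        for (int i = 0; probes[i].file != NULL; i++) {
            FILE *f = fopen(probes[i].file, "r");
            if (f != NULL) {
                printf("looks like %s\n", probes[i].family);
                fclose(f);
                return 0;
            }
        }
        puts("unknown distro - fall back to asking the user");
        return 0;
    }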

Opera's Linux section shows what supporting multiple distros really looks like. Note that some distros have multiple packages for differing versions.
http://www.opera.com/download/index.dml?platform=linux



As for VirtualBox, it is open source while VMware is closed source. VMware has in fact been broken multiple times by updates.

http://www.netritious.com/virtualization/fix-vmware-after-ubuntu-up...
