Linked by Thom Holwerda on Thu 5th Nov 2009 23:05 UTC
Linux As we all know, Mac OS X has support for what is called 'fat binaries'. These are binaries that can carry code for multiple architectures - in the case of the Mac, PowerPC and x86. Ryan Gordon was working on an implementation of fat binaries for Linux - but due to the conduct of the Linux maintainers, Gordon has halted the effort.
Thread beginning with comment 393182
RE[4]: Always On the Cards
by Slambert666 on Fri 6th Nov 2009 07:09 UTC in reply to "RE[3]: Always On the Cards"
Slambert666
Member since:
2008-10-30

Going back to the subject, how would a fat binary solve any of the above problems? If anyone really wants a single binary for different archs, for whatever reason, there's always "#!", and if you play smart with it you can pack everything into a single file and unpack and execute the right binary on demand.
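
(For concreteness, the single-file trick being alluded to looks roughly like this - the name myapp and the per-arch payload layout are made-up examples, not anything from the thread:)

Code:

#!/bin/sh
# One file, several per-arch payloads: everything after the __PAYLOAD__
# marker is a tar archive appended to this script (e.g. with
# `cat launcher.sh payload.tar > myapp`), holding one binary per arch.
ARCH=`uname -m`
TMP=`mktemp -d` || exit 1
LINE=`grep -an '^__PAYLOAD__$' "$0" | head -n 1 | cut -d: -f1`
tail -n +`expr $LINE + 1` "$0" | tar -xf - -C "$TMP"
"$TMP/$ARCH/myapp" "$@"
STATUS=$?
rm -rf "$TMP"
exit $STATUS
__PAYLOAD__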


Sure you can brew together all kinds of custom solutions that could / should / would maybe work, so now you have just doubled the number of problem areas:

1. The program itself.
2. The install system for the above program.

It is so sad that today it is much, much easier to make a new distro (an openSUSE or Ubuntu respin) that contains your program as an addition than it is to make an installer that will work for even a single distro across versions...

A fat binary is not a complete solution, it is not even a partial solution, but it is perhaps the beginning of a solution.

The fat/universal binary thing is just some weird thing Apple did. Maybe they didn't have a common scripting mechanism across old and new systems. Maybe it just fitted better with their multi-data-stream file system. There's no reason to do it the same way when about the same thing can be achieved in a much simpler way, and it's not even clear whether it's a good idea to begin with.


Maybe they just wanted the install process to be easier for their users... and ISVs...

Reply Parent Score: 1

RE[5]: Always On the Cards
by asdf on Fri 6th Nov 2009 08:06 in reply to "RE[4]: Always On the Cards"
asdf Member since:
2009-09-23

Sure you can brew together all kinds of custom solutions that could / should / would maybe work, so now you have just doubled the number of problem areas:

1. The program itself.
2. The install system for the above program.


Modifying the binary format, updating the kernel and then all the toolchains, debuggers and system utilities is going to be much harder than #2.

It is so sad that today it is much, much easier to make a new distro (an openSUSE or Ubuntu respin) that contains your program as an addition than it is to make an installer that will work for even a single distro across versions...


Sure, agreed. Linux is facing different problems compared to proprietary environments.

A fat binary is not a complete solution, it is not even a partial solution, but it is perhaps the beginning of a solution.


But here I completely fail to see the connection. Really, fat binary support in the kernel or system layers doesn't make a whole lot of difference for end users, and we're talking about the end-user experience, right?

The real problem is not in the frigging binary file format; it's in the differences in libraries, configurations and dependencies, and a fat binary contributes nothing to solving that. Also, it's a solution with limited scalability. It sure works fine for Apple, but it will become extremely painful with an increasing number of binaries to support.

Reply Parent Score: 3

RE[5]: Always On the Cards
by smitty on Fri 6th Nov 2009 08:07 in reply to "RE[4]: Always On the Cards"
smitty Member since:
2005-10-13

If 3rd parties don't know how to make a good cross-distro package, it's either their own fault or a limitation of Linux that FatELF would not solve.

A good post from another site:

There seem to be two main reasons to have FatELF as advocated by its proponents:
1. distributing binaries for different architectures bundled together
2. distributing an executable and the libraries it depends on in a single package.

Both of these can be achieved today by simply putting all the binaries and all the libraries into one directory tree, plus a simple script to choose the right version.

I hear cries about how difficult it is to deploy on Linux because of all the distro incompatibilities. Well, let's take a look at a concrete example, such as World of Goo.
It is distributed as a deb, an rpm and a tar.gz. This is the relevant content of the tar.gz:

d icons
d libs32 \_ contains libogg, libvorbis, libSDL
d libs64 /
d res - game data
f readme.html
f WorldOfGoo
f WorldOfGoo.bin32
f WorldOfGoo.bin64

The WorldOfGoo shell script basically only does:

Code:

#!/bin/sh
# Pick the 32-bit or 64-bit build based on the machine type reported by uname.
MACHINE=`uname -m`
if [ "$MACHINE" = x86_64 ]
then
    LIBS=./libs64
    BIN=./WorldOfGoo.bin64
else
    LIBS=./libs32
    BIN=./WorldOfGoo.bin32
fi

# Make the bundled libogg/libvorbis/libSDL visible to the dynamic linker,
# then hand over to the real binary, passing any arguments through quoted.
export LD_LIBRARY_PATH="$LIBS:$LD_LIBRARY_PATH"
exec "$BIN" "$@"

The rpm and the deb differ from the tarball only in that they put the above in /opt/WorldOfGoo and install some shortcuts and icons.

You can't say it takes a lot of effort to 'maintain' that, can you?

Sure, a shell script, however simple and POSIX-compatible it tries to be, is somewhat error-prone (it's easy to use non-standard features and make assumptions). That's why I would rather see a project that factors out the operations usually done in such scripts (another common operation I've seen is locating certain directories) and provides them in a standardized way (some kind of extended symlink framework, maybe). This certainly doesn't look like a kernel problem at all.
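
For concreteness, such a helper might look something like this - the name select-arch-binary, its location and the suffix convention are all invented for the sake of illustration:

Code:

#!/bin/sh
# Hypothetical /usr/bin/select-arch-binary: given a base path such as
# ./WorldOfGoo.bin, print the best-matching per-arch variant so that every
# launcher script doesn't have to hand-roll the same uname logic.
BASE=$1
case `uname -m` in
    x86_64) SUFFIXES="64 32" ;;
    i?86)   SUFFIXES="32" ;;
    *)      SUFFIXES="" ;;
esac
for S in $SUFFIXES; do
    if [ -x "$BASE$S" ]; then
        echo "$BASE$S"
        exit 0
    fi
done
echo "no binary found for `uname -m`" >&2
exit 1

A launcher like the World of Goo one would then shrink to little more than exec `select-arch-binary ./WorldOfGoo.bin` "$@" (after setting LD_LIBRARY_PATH the same way).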

Somebody asked why it isn't done? Maybe because the status quo works as well as it does!


Edited 2009-11-06 08:11 UTC

Reply Parent Score: 6

RE[6]: Always On the Cards
by segedunum on Fri 6th Nov 2009 13:23 in reply to "RE[5]: Always On the Cards"
segedunum Member since:
2005-07-06

Because that's a crap hodge-podge 'solution' for the fact that Linux distributions have no way to handle the installation of third-party software. There's no standardised way for any distribution to actually handle that or know what's installed, which is absolutely essential when you are dealing with third-party software. Conflicts can and do happen, and the fact that anyone is having to do this is so amateurish it isn't believable.

It doesn't handle ABI versions, and it doesn't handle slightly different architectures like i686 that could handle the binaries... It's such a stupid thing to suggest ISVs do that it's just not funny.

It's the sort of 'solution' that makes the majority of ISVs think that packaging for Linux distributions is like the Wild West. Telling those ISVs that they are silly and wrong is also stupid and amateurish in the extreme, but still people believe they are going to change things by saying it.

Edited 2009-11-06 13:39 UTC

Reply Parent Score: 2

RE[6]: Always On the Cards
by MrWeeble on Fri 6th Nov 2009 14:56 in reply to "RE[5]: Always On the Cards"
MrWeeble Member since:
2007-04-18

Here is a potential maintenance problem with that script:

Let's say that AMD creates a new architecture in a couple of years and calls it x86_64_2. x86_64_2 is to x86_64 what i686 is to i386, so it is perfectly capable of running legacy x86_64 code; it just runs a little better if given x86_64_2 code.

Now let's install World of Goo on this machine. It may be a few years old by this point, but it is a very cool and addictive game, so we still want it.

The script checks the machine architecture. Is it "x86_64"? No. Therefore it runs the 32-bit version. Sure, it will work, but it should be running the 64-bit version, not the 32-bit one.

Now, what if it was compiled as a single set of FatELF binaries?

WorldOfGoo.bin gets run. The OS "knows" that it is running on x86_64_2 and looks at the index to see if there is any x86_64_2 code in the binary; there isn't, but it also "knows" that x86_64 is the second-best type of code to run (followed probably by i686, i586, i486, i386). It finds the x86_64 binary and executes it. That binary looks in ./libs for the library files and, for each one, performs the same little check.

Sure, it will take a few milliseconds extra, but it will run the right version of the code on future, unknown platforms.
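
To spell out what the launcher script would have to do to approximate this in userland - the preference table below is exactly the part that needs hand-editing every time a new architecture appears, which is the maintenance problem described above (the names, suffixes and ordering are illustrative assumptions):

Code:

#!/bin/sh
# Map the machine type to a best-to-worst list of architectures, then run
# the first binary that was actually shipped. A new arch such as x86_64_2
# means editing this table in every launcher ever shipped; FatELF would
# keep that knowledge in one place, the system's ELF loader.
case `uname -m` in
    x86_64_2) PREF="x86_64_2 x86_64 i686 i586 i486 i386" ;;
    x86_64)   PREF="x86_64 i686 i586 i486 i386" ;;
    i686)     PREF="i686 i586 i486 i386" ;;
    *)        PREF="i386" ;;
esac
for ARCH in $PREF; do
    if [ -x "./WorldOfGoo.bin.$ARCH" ]; then
        LD_LIBRARY_PATH="./libs.$ARCH:$LD_LIBRARY_PATH"
        export LD_LIBRARY_PATH
        exec "./WorldOfGoo.bin.$ARCH" "$@"
    fi
done
echo "no usable binary for `uname -m`" >&2
exit 1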

To my mind, FatELF is an elegant and simple solution to this sort of problem.

Reply Parent Score: 2

RE[5]: Always On the Cards
by sbenitezb on Fri 6th Nov 2009 09:18 in reply to "RE[4]: Always On the Cards"
sbenitezb Member since:
2005-07-22

A fat binary is not a complete solution, it is not even a partial solution, but it is perhaps the beginning of a solution.


Complete nonsense. You still need to compile all that stuff you put in the binary, so how is that going to help you? The only practical solution is static linking, which is what most should really do. For example, you can download and install a statically compiled Qt4 build of Opera and it will work regardless of distribution. The same goes for Skype and some other closed software. So it's not impossible to do, and yes, you need testing.

Reply Parent Score: 2

setec_astronomy Member since:
2007-11-17

Spot on.

I would like to add that in this discussion, two separate problem fields got mixed up.

The first one is concerned with providing an - ideally - unified binary for a range of "hardware architectures"; the ARM notebook / scrabble game example from the comment above comes to mind.
As others have already pointed out, it is difficult to sell the advantages of a "FatELF" when all the costs and problems inherent in such a solution could be avoided by solving the problem where it actually occurs, namely at the point of software distribution.
If you download, for example, Acrobat Reader, the script operating the download form tries to guess the right system parameters and automatically provides the "correct" / "most likely" binary to match your needs. Additionally, advanced users can manually select the right binary for their environment from a - given the number of Linux distros out there - surprisingly small number of binaries and be done with it.

This, plus the possibility of choosing an approach similar to the one used by the World of Goo developers, is less invasive to the target system and, if done well, more convenient for the end user.

The second, somewhat orthogonal, problem set is the large degree of dispersion when it comes to what a Linux-based operating system actually is (what is sometimes referred to as "distro hell" or "dependency hell"). It is imho crucial to get rid of the idea that, from the point of view of an ISV, there is something like an "operating system". What the application developer relies on is a set of "sandboxes" that provide the necessary interfaces for the particular program to run.

In the case of MS Windows or Mac OS X, there is - in essence - a rather small number of "blessed" sandboxes provided by the vendor that allow ISVs to easily target these operating systems. The reason for the small number of these sandboxes is imho related to the fact that there is essentially only one vendor deciding what is proper and what is not, i.e. it's more a cultural difference than a technical one. Failing to address the fact that - for example - Linux-based operating systems live and operate in an environment with multiple vendors of comparable influence and importance, and with multiple, equally valid and reasonable answers to the question of "which components exactly should go into our operating system and how do we combine them", misses the mark a little bit. Diversity between the different distros is not some kind of nasty bug that crept into our ecosystem; it is a valid and working answer to the large number of different needs and usage patterns of end users. Moreover, it is kind of inevitable to face this problem sooner or later in an ecosystem where players rely on compatibility at the source code level and on the (legal and technical) ability to mix and match different code bases to get their job done.

If an ISV is unwilling or unable to participate in this ecosystem by providing source-level compatibility (e.g. open-sourcing the parts that are needed to interface with the surrounding environment), they have a number of options at hand:

- Target a cross-platform toolkit such as Qt4, thus replacing a large number of small sandboxes with one rather large one, which is either provided by the operating system (i.e. the distro) or statically linked.

- Use a JIT-compiled / interpreted / bytecode run-time environment, again abstracting a large number of sandboxes into the "interpreter", and again rely on the operating system to provide a compatible runtime, or additionally ship your own for a number of interesting platforms.

- Use something like libwine, for example.

- Rely on a reasonable and conservative base set of sandboxes (think LSB) and carry the remainder of your dependencies with you as statically linked libraries (a rough sketch of such a link step follows below). There are problematic fields, audio for example - Skype and Flash are notorious in this department - but the binary format of an executable strikes me as a rather bad place to fix this problem.
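
To make the last option a bit more concrete, a link step in that spirit could look roughly like this - the library names are placeholders, not a recommendation:

Code:

# Keep the conservative, LSB-ish system libraries dynamic and fold the more
# volatile dependencies into the binary itself (requires the .a archives of
# those libraries to be installed at build time).
g++ -o mygame main.o game.o \
    -Wl,-Bstatic -lvorbis -logg \
    -Wl,-Bdynamic -lSDL -lX11 -lpthread -lm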

Reply Parent Score: 6

RE[6]: Always On the Cards
by segedunum on Fri 6th Nov 2009 14:19 in reply to "RE[5]: Always On the Cards"
segedunum Member since:
2005-07-06

Complete nonsense. You still need to compile all that stuff you put in the binary, so how is that going to help you?

That's because you have no idea what the problem actually is, as most people or even developers fannying about on forums like this don't.

The problem is not compilation, and I don't know why various idiots around here keep repeating that. It never has been. The cost in time, resources and money has always been in the actual deployment. Packaging for a specific environment, testing it and supporting it for its lifetime is a damn big commitment. If you're not sure what is going to happen once it's deployed, then you're not going to do it.

The only practical solution is static linking, which is what most should really do.

You lose all the obvious benefits of any kind of package, architecture or installation management system, which ISVs effectively have to start writing themselves, at least in part. We're no further forward than what Loki had to cobble together years ago, and for vendors whose business does not depend on Linux it is something they will simply never do. Why would they, when other more popular platforms provide what they want?

In addition, it's never entirely clear what it is that you need to statically link and include in your package. You might detect installed system packages manually, load them dynamically and fall back to whatever you have bundled with your package, but the potential for divergence in that, from a support point of view, should be very obvious.
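
(For what it's worth, the crude launcher-level version of that detect-then-fall-back dance looks something like this - the library name and the bundled directory are just examples:)

Code:

#!/bin/sh
# Use the system copy of a library when the linker cache knows about it,
# otherwise point the dynamic linker at the copy bundled with the package.
if ldconfig -p | grep -q 'libvorbis\.so\.0'; then
    : # system copy present, nothing to do
else
    LD_LIBRARY_PATH="./bundled-libs:$LD_LIBRARY_PATH"
    export LD_LIBRARY_PATH
fi
exec ./myapp "$@"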

For example, you can download and install a statically compiled Qt4 build of Opera and it will work regardless of distribution. The same goes for Skype and some other closed software. So it's not impossible to do, and yes, you need testing.

Hmmmm. I thought you were complaining about the disk space that FatELF would consume at some point.........

Anyway, just because some can do it doesn't make it any less crap. It is hardly the road to the automated installation approach that is required.

Reply Parent Score: 2

RE[5]: Always On the Cards
by gustl on Fri 6th Nov 2009 17:02 in reply to "RE[4]: Always On the Cards"
gustl Member since:
2006-01-19

I disagree.

Make a completely static binary and you are good for all distros today and for the next five years to come.

Should be good enough.
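
The naive version of that is a one-liner - file names here are illustrative, and whether every library involved actually supports it is another matter (see the reply below):

Code:

# Ask the toolchain to link everything statically so the resulting binary
# has no shared-library dependencies at all.
gcc -static -o mygame main.c game.c `sdl-config --static-libs` -lm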

Reply Parent Score: 2

RE[6]: Always On the Cards
by Timmmm on Mon 9th Nov 2009 17:38 in reply to "RE[5]: Always On the Cards"
Timmmm Member since:
2006-07-25

Obviously you have never tried to make a completely static, non-trivial binary. Some libraries (e.g. freetype, I think) explicitly do not support static linking. Static linking with libstdc++ is also a nightmare.
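
(One common workaround with the toolchains of that era - and, as far as I can tell, roughly what the links below discuss - is to hide the shared libstdc++ from the linker so that only the static archive is found:)

Code:

# Put a symlink to the static archive in a private directory and make the
# linker search it first; with no libstdc++.so there, the .a gets picked up.
mkdir -p static-libs
ln -sf `g++ -print-file-name=libstdc++.a` static-libs/
g++ -o myapp main.o -L./static-libs -static-libgcc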

Links:

http://www.baus.net/statically-linking-libstdc++
http://www.trilithium.com/johan/2005/06/static-libstdc/
http://gcc.gnu.org/ml/gcc/2002-08/msg00288.html

and so on...

Reply Parent Score: 1