Linked by Thom Holwerda on Thu 5th Nov 2009 23:05 UTC
Linux As we all know, Mac OS X has support for what are called 'fat binaries'. These are binaries that can carry code for multiple architectures; in the case of the Mac, PowerPC and x86. Ryan Gordon was working on an implementation of fat binaries for Linux, but due to the conduct of the Linux maintainers, Gordon has halted the effort.
Thread beginning with comment 393163
RE[3]: Always On the Cards
by asdf on Fri 6th Nov 2009 04:41 UTC in reply to "RE[2]: Always On the Cards"
asdf Member since:
2009-09-23

So 40 binaries to cover just the basics; and if your app is a little more complicated and has hardware discovery and a daemon that gets activated at install time, then you will have to make your binaries not only distribution-dependent but also dependent on the version of the distribution.


Going back to the subject, how would a fat binary solve any of the above problems? If anyone really wants a single binary for different archs for whatever reason, there's always "#!", and if you play it smart, you can pack everything into a single file and unpack and execute the right binary on demand.
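The "#!" trick described here can be sketched as a self-extracting launcher: a shell header followed by an appended tar payload of per-arch binaries, unpacked and executed on demand. Everything below (the `myapp.run` name, the `bin/` layout) is hypothetical, just to illustrate the idea:

```shell
#!/bin/sh
# Hypothetical sketch: build a single self-extracting launcher whose
# shell header unpacks a tar payload and runs the binary matching
# `uname -m`. File names are made up for illustration.
set -e

# Stand-in for a real per-arch binary; an actual build would produce
# bin/myapp.x86_64, bin/myapp.i686, and so on.
mkdir -p bin
printf '#!/bin/sh\necho hello\n' > "bin/myapp.$(uname -m)"
chmod +x "bin/myapp.$(uname -m)"

# Shell header: everything after the __ARCHIVE__ marker is tar data.
cat > myapp.run <<'EOF'
#!/bin/sh
TMP=$(mktemp -d)
trap 'rm -rf "$TMP"' EXIT
sed '1,/^__ARCHIVE__$/d' "$0" | tar -x -C "$TMP"
"$TMP/bin/myapp.$(uname -m)" "$@"
exit $?
EOF
echo '__ARCHIVE__' >> myapp.run
tar -c bin >> myapp.run
chmod +x myapp.run

./myapp.run   # unpacks and runs the binary for the current arch
```

The `exit $?` in the header matters: it stops the shell before it tries to read the binary tar data as script text, which is the same pattern installers like makeself rely on.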

The fat/universal binary thing is just some weird thing Apple did. Maybe they didn't have a common scripting mechanism across the old and new systems. Maybe it just fitted better with their multi-data-stream file system. There's no reason to do it the same way when roughly the same thing can be achieved in a much simpler way, and it's not even clear whether it's a good idea to begin with.

Reply Parent Score: 2

RE[4]: Always On the Cards
by Slambert666 on Fri 6th Nov 2009 07:09 in reply to "RE[3]: Always On the Cards"
Slambert666 Member since:
2008-10-30

Going back to the subject, how would a fat binary solve any of the above problems? If anyone really wants a single binary for different archs for whatever reason, there's always "#!", and if you play it smart, you can pack everything into a single file and unpack and execute the right binary on demand.


Sure, you can brew together all kinds of custom solutions that could/should/would maybe work, but now you have just doubled the number of problem areas:

1. The program itself.
2. The install system for the above program.

It is sad that today it is much, much easier to make a new distro (an openSUSE or Ubuntu respin) that contains your program as an addition than it is to make an installer that will work for even a single distro across versions ...

A fat binary is not a complete solution; it is not even a partial solution. But it is perhaps the beginning of a solution.

The fat/universal binary thing is just some weird thing Apple did. Maybe they didn't have a common scripting mechanism across the old and new systems. Maybe it just fitted better with their multi-data-stream file system. There's no reason to do it the same way when roughly the same thing can be achieved in a much simpler way, and it's not even clear whether it's a good idea to begin with.


Maybe they just wanted the install process to be easier for their users ... and ISVs ...

Reply Parent Score: 1

RE[5]: Always On the Cards
by asdf on Fri 6th Nov 2009 08:06 in reply to "RE[4]: Always On the Cards"
asdf Member since:
2009-09-23

Sure, you can brew together all kinds of custom solutions that could/should/would maybe work, but now you have just doubled the number of problem areas:

1. The program itself.
2. The install system for the above program.


Modifying the binary format, then updating the kernel, all the toolchains, the debuggers, and the system utilities is going to be much harder than #2.

It is sad that today it is much, much easier to make a new distro (an openSUSE or Ubuntu respin) that contains your program as an addition than it is to make an installer that will work for even a single distro across versions ...


Sure, agreed. Linux is facing different problems compared to proprietary environments.

A fat binary is not a complete solution; it is not even a partial solution. But it is perhaps the beginning of a solution.


But here I completely fail to see the connection. Really, fat binary support in the kernel or system layers doesn't make a whole lot of difference for the end users, and we're talking about the end-user experience, right?

The real problem is not in the frigging binary file format; it's in the differences in libraries, configurations, and dependencies, and a fat binary contributes nothing to solving that. It's also a solution with limited scalability: it works fine for Apple, but it becomes extremely painful as the number of binaries to support grows.

Reply Parent Score: 3

RE[5]: Always On the Cards
by smitty on Fri 6th Nov 2009 08:07 in reply to "RE[4]: Always On the Cards"
smitty Member since:
2005-10-13

If third parties don't know how to make a good cross-distro package, it's either their own fault or a limitation of Linux that FatELF would not solve.

A good post from another site:

There seem to be two main reasons to have FatELF, as advocated by its proponents:
1. distributing binaries for different architectures bundled together;
2. distributing an executable and the libraries it depends on in a single package.

Both of these can be achieved today by simply putting all the binaries and all the libraries into one directory tree, plus a simple script to choose the right version.

I hear cries about how difficult it is to deploy on Linux because of all the distro incompatibilities. Well, let's take a look at a concrete example: WorldOfGoo.
It is distributed as a deb, an rpm, and a tar.gz. This is the relevant content of the tar.gz:

d icons
d libs32 \_ contains libogg, libvorbis, libSDL
d libs64 /
d res - game data
f readme.html
f WorldOfGoo
f WorldOfGoo.bin32
f WorldOfGoo.bin64

The WorldOfGoo shell script basically only does:

Code:

MACHINE=$(uname -m)
if [ "$MACHINE" = "x86_64" ]; then
    LIBS=./libs64
    BIN=./WorldOfGoo.bin64
else
    LIBS=./libs32
    BIN=./WorldOfGoo.bin32
fi

export LD_LIBRARY_PATH="$LIBS:$LD_LIBRARY_PATH"
exec "$BIN" "$@"

The rpm and deb differ from the tarball only in that they put the above in /opt/WorldOfGoo and install some shortcuts and icons.

You can't say it takes a lot of effort to 'maintain' that, can you?

Sure, a shell script, however simple and POSIX-compatible it tries to be, is somewhat error-prone (it's easy to use some non-standard features and make assumptions). That's why I would rather see a project that factors out the operations usually done in such scripts (another common operation I've seen is locating certain directories) and provides them in a standardized way (some kind of extended-symlink framework, maybe). This certainly doesn't look like a kernel problem at all.

Somebody asked why it isn't done. Maybe because the status quo works as well as it does!


Edited 2009-11-06 08:11 UTC

Reply Parent Score: 6

RE[5]: Always On the Cards
by sbenitezb on Fri 6th Nov 2009 09:18 in reply to "RE[4]: Always On the Cards"
sbenitezb Member since:
2005-07-22

A fat binary is not a complete solution; it is not even a partial solution. But it is perhaps the beginning of a solution.


Complete nonsense. You still need to compile all that stuff you put in the binary, so how is that going to help you? The only practical solution is static linking, which is what most should really do. For example, you can download and install a statically compiled Qt4 build of Opera and it will work regardless of distribution. The same goes for Skype and some other closed-source software. So it's not impossible to do, and yes, you still need testing.

Reply Parent Score: 2

RE[5]: Always On the Cards
by gustl on Fri 6th Nov 2009 17:02 in reply to "RE[4]: Always On the Cards"
gustl Member since:
2006-01-19

I disagree.

Make a completely static binary and you are good for all distros today and for the next five years to come.

Should be good enough.

Reply Parent Score: 2