Linked by Thom Holwerda on Fri 11th Aug 2006 08:33 UTC, submitted by Rob Williams
Techgage reviews Sabayon Linux, and concludes: "After taking an initial look at Sabayon, I have mixed feelings. Though, I feel more joy when using it than anything negative. One reason this distro may stand out above others is because it takes a difficult base distro, and opens its arms for new users who want to experiment. When it's all said and done, you will have a fully functional Gentoo machine after the installation, topped off with a Sabayon coat of paint. What a great looking coat of paint it is."
Thread beginning with comment 151415
sounds good
by ozonehole on Fri 11th Aug 2006 13:38 UTC
Member since:

I haven't tried Sabayon yet, but it sounds like just what I was looking for. I'm always annoyed by distros that, after being installed, are still lacking many important apps that I use (which means a lot more downloading). Having it all on one DVD would be great.

However, I'm still a little sceptical of anything based on Gentoo (all that compiling nearly drove me nuts when I tried it). Hopefully, Sabayon will eliminate most of the need for that (provided you don't try to update the system with emerge -uD world).

Edited 2006-08-11 13:40

Reply Score: 1

RE: sounds good
by Sphinx on Fri 11th Aug 2006 18:45 in reply to "sounds good"
Sphinx Member since:

If you can read your compiler's output you need a faster machine.

Reply Parent Score: 1

RE: sounds good
by butters on Sat 12th Aug 2006 04:18 in reply to "sounds good"
butters Member since:

I've been a Gentoo user since the beginning. Before devfs support, before ACCEPT_KEYWORDS, and certainly before GRP. The project has come a long way, but it hasn't really innovated outside of its roots. For one thing, back in 2001 I figured that if Gentoo were still around in 2006, it would have a binary package repository in addition to the Portage tree.

What Gentoo is missing is a way to harness the power of all of those compute cycles as thousands of Gentoo users compile packages from source. I have two proposals:

PortNet: Gentoo users are encouraged (but not required) to install and run a distributed computing client (e.g. SETI@Home) that uses their spare CPU cycles to help build binary packages for new ebuilds. A cluster of central servers will emerge -b (build package) new ebuilds as they bump to stable, using multiple PortNet clients as a distcc cluster. All builds use global CFLAGS and USE flags, regardless of local configurations, so that's what you get when you install binary packages from the PortNet repository.
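To make the PortNet idea concrete, here's a minimal sketch of the server side of such a scheme. Everything here is hypothetical (PortNet doesn't exist, and the names GLOBAL_CFLAGS, PortNetServer, etc. are made up for illustration): a queue of newly stabilized ebuilds gets handed out to volunteer clients, always paired with the project-wide flags rather than each client's local ones.

```python
# Hypothetical sketch of a PortNet job server. Nothing here is real
# Gentoo infrastructure; the names and flag values are illustrative.

GLOBAL_CFLAGS = "-O2 -pipe"           # assumed project-wide defaults
GLOBAL_USE = ("X", "gtk", "ssl")      # assumed global USE flags


class PortNetServer:
    """Hands out build jobs for ebuilds that just bumped to stable."""

    def __init__(self):
        self.queue = []  # ebuilds waiting to be built into binary packages

    def bump_to_stable(self, ebuild):
        """Called when an ebuild is stabilized; queues it for building."""
        self.queue.append(ebuild)

    def next_job(self):
        """Give a volunteer client the next ebuild plus the global flags.

        The client builds with these flags regardless of its own local
        configuration, so the resulting binary package is reproducible
        across the whole PortNet cluster.
        """
        if not self.queue:
            return None
        return {
            "ebuild": self.queue.pop(0),
            "cflags": GLOBAL_CFLAGS,
            "use": GLOBAL_USE,
        }
```

The key design point the proposal makes is the last one: by pinning CFLAGS and USE flags globally, any client's output is interchangeable with any other's, which is what lets the results be published as one shared binary repository.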

PortCache: Gentoo users are encouraged (but not required) to build Portage with a USE flag that enables the user to share the results of their local builds with the Gentoo community. All emerge processes will imply -b (build package), and when the package is complete, it will be uploaded to the PortCache along with metadata indicating which USE flags the package was built with (if PortCache doesn't already contain a matching package). Users can ask emerge to query PortCache before building an ebuild, and if it already contains a package built with the desired USE flags, they can skip the build and install the binary package.
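A rough sketch of the PortCache lookup, with everything labeled hypothetical (PortCache is a proposal, not a real service, and cache_key, PortCache, and this emerge wrapper are invented for illustration): the cache is keyed on the ebuild identity plus its USE flags, so a query only hits if someone already built that exact combination.

```python
# Hypothetical sketch of the PortCache proposal. Not real Portage code;
# all class and function names here are illustrative.
import hashlib


def cache_key(package, version, use_flags):
    """Derive a repeatable key from ebuild identity plus USE flags.

    Flags are sorted so that flag order doesn't change the key.
    """
    spec = "%s-%s:%s" % (package, version, ",".join(sorted(use_flags)))
    return hashlib.sha1(spec.encode("utf-8")).hexdigest()


class PortCache:
    """Toy in-memory stand-in for the proposed central binary cache."""

    def __init__(self):
        self._store = {}

    def upload(self, package, version, use_flags, binpkg):
        # Keep only the first matching package, as the proposal suggests.
        self._store.setdefault(cache_key(package, version, use_flags), binpkg)

    def query(self, package, version, use_flags):
        return self._store.get(cache_key(package, version, use_flags))


def emerge(cache, package, version, use_flags):
    """Install from the cache when a matching binary exists, else build.

    The local build implies -b: its result is uploaded for other users.
    """
    if cache.query(package, version, use_flags) is not None:
        return "installed %s from PortCache" % package
    binpkg = "%s-%s.tbz2" % (package, version)  # stand-in for compiling
    cache.upload(package, version, use_flags, binpkg)
    return "built %s from source" % package
```

The interesting property falls out of the keying: the first user to build a given package/version/USE combination pays the compile cost, and everyone after them with the same combination gets a binary install.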

Both of these ideas require centralized resources, especially lots of storage and bandwidth. But something like this would really make a big impact on a lot of Gentoo users.

Reply Parent Score: 3

RE[2]: sounds good
by butters on Sat 12th Aug 2006 04:47 in reply to "RE: sounds good"
butters Member since:

Now that I think about it, PortCache could be made much lighter by decentralizing the system and taking a P2P kind of approach. Gentoo users who opt in send metadata only (ebuild version, dependency versions, USE flags, and some sort of client ID) to a central server when they do local builds. Other users who opt in query the server for matching build metadata when they run emerge.

If there are plenty of matches, the server will use something like netselect to find the closest matching peers and instruct those machines to use their spare CPU cycles to build a package from the already merged version. The first one done notifies the server, which terminates the package builds on the other machines and arranges a connection between the two peers. After the transfer and merge are complete, the new system's metadata entry is added to the server.

There are probably some security and portability issues. You know, the devil is in the details. But I think this might actually be a reasonable system.

No I can't edit... still.

Reply Parent Score: 2