Is it as bloated as Fedora Core 2?! I think they’re losing it in terms of being sleek. A 192 MB memory requirement for Linux is really stupid, and they should really get that down.
Well, at least it’s not as bloated as SuSE. I was running SuSE 9.1 on my laptop before FC2 came out, and when I finally installed FC2 there I couldn’t believe how much faster it is!
Don’t install stuff you don’t need. Just because it has 4 CDs doesn’t mean you have to install them all.
“Well, at least it’s not as bloated as SuSE. I was running SuSE 9.1 on my laptop before FC2 came out, and when I finally installed FC2 there I couldn’t believe how much faster it is!”
So … which did you feel was faster, SuSE or FC2?
-D
I hope they’re fixing problems where you MUST use their kernel in order for certain features to even work (NFS is the first one that comes to mind). We’ve built our website replacement system on FC2_x86_64 and were EXTREMELY annoyed that tons of stuff broke when we tried to put a vanilla kernel on the system.
“Well, at least it’s not as bloated as SuSE. I was running SuSE 9.1 on my laptop before FC2 came out, and when I finally installed FC2 there I couldn’t believe how much faster it is!”
What FUD. Fedora Core 2 had its butt handed to it by Windows XP SP1 and Mandrake 10 in recently published benchmarks. SuSE 9.1 is maybe a tad slower than Mandrake 10 (only because SuSE starts more daemons at boot time than MDK), and it runs circles around Core 2. If you disable a handful of the bloated, superfluous services that SuSE starts at boot (an elementary procedure), it becomes one of the fastest out-of-the-box Linux distros on the market.
Doesn’t it explain itself?
It’s an XP bug, not a Linux bug?
“Arch Linux speed + FC easiness = bliss”
Lemme guess. You use Fluxbox and disable every service you can on Arch, so it is “fast”. Then you go on forums and complain about the speed of the default install of Fedora and GNOME.
And now for the people complaining about “bloat”: what are you talking about? Too many packages? Too many features? Bad UI? Lack of speed in GNOME? KDE? gedit? Maybe if you actually phrased your complaint so that it applied to the real world, you might actually get somewhere.
Nope, I use GNOME + KDE. I mean FC2 is just slower, that’s all. Third sentence, yes.
Bloat: slow startup, slow interface (maybe GNOME’s fault, though). I guess I don’t mean bloat in the traditional sense, though… except when I apply it to KDE.
I’m not a troll, really!
“Doesn’t it explain itself?
It’s an XP bug, not a Linux bug?”
Yup, that’s the attitude of the Fedora developers too, oh well…
>Doesn’t it explain itself?
>It’s an XP bug, not a Linux bug?
How can you say that when Linux systems have allowed XP to boot alongside them for so long? Fedora Core 1 itself had no problems letting XP boot. Mandrake, SuSE, FreeBSD, even Solaris don’t seem to have a problem, yet Core 2 does, and suddenly it is a Windows XP bug rather than a Fedora Core 2 bug?
That’s some fine logic!
Hector
I believe the word you are looking for is “speed”. As in, it takes 2 minutes to boot. Or Nautilus takes 5 seconds to start up.
The point is, people have heard so much about speed/bloat from the “Gentoo trolling/Red Hat bashing/Mandrake is only for n00bs/RPMs are evil” crowd that you have to be specific in your claims and do some serious testing… which takes more time than shooting off on a forum.
“>Doesn’t it explain itself?
>It’s an XP bug, not a Linux bug?”
Actually, it is a BIOS bug. If the BIOS is not set up to correctly handle LBA mode without being told to, this will happen. It happens with all distros. Set your BIOS properly and there is nothing wrong. FC2 dual boots with Windows XP just fine.
Didn’t see it.
OK, stop swearing please.
Has anyone installed this? Any ideas?
I installed Slack 10 FULL and had no problems. No slowdowns. At the same time I had FC2 installed on the same computer. It’s 2-3 times slower, even in boot times.
FC2 ran great on my system; in fact it ran better in some cases than Slackware 9.1 on the same box (updated to a 2.6 kernel)…
Stop bashing the distro just because your system sucks, or you don’t know how to deal with the small performance loss you get in exchange for ease of setup.
If you want speed, use Gentoo.
I just put Slack 10 on my Dell laptop and it does boot up pretty quickly compared to FC2 (not as fast as prior Slack versions, though, for some reason). Speed of resizing windows etc. in GNOME is a bit faster with Slack than FC2, which I have on another partition. But, for many things, it is pretty comparable.
FC2 would not even let me connect to the Internet when I first installed it. I ended up going back to Mandrake on the machine I have Linux on.
Why wasn’t the bloat comment moderated down? Just because there is a lot of software available on the distro, it is by no means mandatory to install all of it. It’s sad to see such comments reviewed and not moderated down.
“Why wasn’t the bloat comment moderated down? Just because there is a lot of software available on the distro, it is by no means mandatory to install all of it. It’s sad to see such comments reviewed and not moderated down.”
I corrected my statement afterward.
BTW, “just because there is a lot of software available on the distro, it is by no means mandatory to install all of it” is not the definition of bloat at all.
No, it’s not an XP bug.
It’s a parted bug.
Fdisk can figure out the CHS values just fine, parted apparently can’t.
So when you use parted the CHS part of the partition descriptor is wrong (As far as Windows is concerned anyhow), but the LBA part is ok. This is why switching to LBA in BIOS works.
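For context on why the two values can disagree: a partition table entry stores the same sector in both CHS and LBA form, and the two are related through the drive geometry. Here is a minimal sketch of that conversion, assuming the common translated geometry of 255 heads and 63 sectors per track (real drives and BIOSes may report different values):

```python
# CHS <-> LBA conversion for a partition table entry, assuming a
# translated geometry of 255 heads and 63 sectors/track.
HEADS = 255
SECTORS_PER_TRACK = 63

def lba_to_chs(lba, heads=HEADS, spt=SECTORS_PER_TRACK):
    """Return the (cylinder, head, sector) triple for an LBA sector number."""
    cylinder, remainder = divmod(lba, heads * spt)
    head, sector = divmod(remainder, spt)
    return cylinder, head, sector + 1  # CHS sector numbering starts at 1

def chs_to_lba(cylinder, head, sector, heads=HEADS, spt=SECTORS_PER_TRACK):
    """Inverse of lba_to_chs for the same assumed geometry."""
    return (cylinder * heads + head) * spt + (sector - 1)
```

If the tool writing the CHS fields assumes a different geometry than the BIOS (or Windows) uses when reading them back, the two halves of the same entry describe different sectors, which matches the symptom described above.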
However, switching to LBA in the BIOS is _NOT_ a long-term solution. As soon as you try to alter a partition using the Windows disk manager, you’ll mess up the partition table, because it calculates the new partition table based on the screwed-up CHS values parted left behind. The end result is a system that won’t boot (or, more likely, a GRUB prompt that won’t boot anything).
The real solution is to set up your partitions beforehand with fdisk using something like Knoppix and skip the partitioning step of the install, or to update the geometry stored by Windows after parted has finished screwing around (not for the weak of heart).
Either way it’s still the parted developers’ fault for busting something that’s been working for years IMHO. Unless something changed in the kernel I don’t understand why they changed things.
“Either way it’s still the parted developers’ fault for busting something that’s been working for years IMHO. Unless something changed in the kernel I don’t understand why they changed things.”
The kernel doesn’t manage the partition table as of 2.6; it’s solely dependent on parted.
I guess what is referred to as bloat is the memory footprint associated with using a modern GNOME/KDE desktop. This IS a problem.
Interestingly, a friend of mine who is a C developer for a company that compiles and delivers the same software for several platforms (Solaris, HP-UX, SGI, OS X, …) mentioned that the version compiled with gcc used much more memory than the others (the version compiled with SGI’s compiler had the smallest footprint; the gcc one had nearly double).
Anyone with insight on this matter? Is the commonly-used compiler (gcc) to blame?
I run FC2 on a P3 1 GHz with 256 MB RAM. Once I boot, log into GNOME, and run System Monitor, I have 91 MB of RAM used. This is using all the default options but disabling the services I don’t need (canna, sendmail, isdn, apmd, nfs, etc.), mostly all the stuff a desktop PC with a dial-up connection will never need. So much for a bloated system; this is normal customization, easily done with GNOME tools launched from the panel. 91 MB is quite low IMHO.
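For anyone wondering how a figure like that is measured: System Monitor’s “used” number is roughly total RAM minus free memory, buffers, and page cache, all of which the kernel reports in /proc/meminfo. A small sketch of the same calculation (field names as in 2.6-era kernels, and still present today):

```python
# Compute "used" memory roughly the way System Monitor reports it:
# MemTotal - MemFree - Buffers - Cached, read from /proc/meminfo.
# All /proc/meminfo values are in kB.
def used_mb(meminfo_text):
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            fields[key] = int(rest.split()[0])
    used_kb = (fields["MemTotal"] - fields["MemFree"]
               - fields.get("Buffers", 0) - fields.get("Cached", 0))
    return used_kb // 1024

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        print(used_mb(f.read()), "MB used")
```

Comparing that number across distros (rather than the raw “free” column of top) gives a fairer picture, since buffers and cache are reclaimed on demand.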
I also compiled a 2.6.7 kernel.org kernel, and that is the one I use. I removed all the drivers I don’t need, dropped support for > 1 GB RAM, removed all filesystem support except ext2/3, FAT, and CD filesystem types, and removed IPv6 and SELinux.
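For reference, a trim like that boils down to a handful of options; the lines below are illustrative fragments of what such a 2.6-era .config could contain (exact option names vary by kernel version, so treat these as examples rather than a recipe):

```
CONFIG_NOHIGHMEM=y                      # no support for > 1 GB RAM
# CONFIG_HIGHMEM4G is not set
# CONFIG_IPV6 is not set
# CONFIG_SECURITY_SELINUX is not set
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
CONFIG_VFAT_FS=y                        # FAT support
CONFIG_ISO9660_FS=y                     # CD filesystem
# CONFIG_XFS_FS is not set
# CONFIG_JFS_FS is not set
```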
Start up, log into GNOME, launch System Monitor: 65 MB used. Granted, recompiling the kernel is advanced (but quite easy using the tools).
Also, the boot process is faster (30%-50%).
Not everyone is bashing for no reason.
I’ve tried Red Hat and Mandrake and I noticed the bloat:
lots of services running by default,
a giant installation,
problems with rpm and corruption of the RPM database.
So I switched to Slackware =;)
The drive geometry bug is also a big minus;
I would not recommend Fedora to anyone after it.
The right thing they should have done is to release new ISOs with the fix and put a BIG warning on their homepage.
“Anyone with insight on this matter? Is the commonly-used compiler (gcc) to blame?”
Not really. gcc is getting better; with the switch to SSA in 3.5 it’s bound to be pretty fast. However, gcc is a common frontend for multiple languages and supports a huge number of platforms, with no competition at all in that area, so compilers like Intel’s and SGI’s, which target one specific platform or architecture, are bound to be faster.
“The drive geometry bug is also a big minus;
I would not recommend Fedora to anyone after it.
The right thing they should have done is to release new ISOs with the fix and put a BIG warning on their homepage.”
And it affected Mandrake and SuSE too. So?
You know, I read everywhere about how Red Hat was touting the fast release cycle. This can’t be a good thing for users, though… having to download more ISOs every 6 months can’t be a good thing.
I use Debian, I use the unstable branch, and it’s probably more stable than Fedora due to Fedora’s seemingly rushed development. It’s just as up to date also, if not more so… I just don’t understand how burning 4 CDs every 6 months is a good thing.
To each their own, I suppose. I know I would be annoyed by it if it were affecting me, though…
Just my 2 cents.
“This can’t be a good thing for users though… Every 6 months having to download more ISOs can’t be a good thing.”
Who is forcing you? Skip releases or don’t upgrade. Just stick with what you like.
That’s all well and good, but I WANT to upgrade… it’s just annoying burning more ISOs…
I haven’t had to get new ISOs since I started using Debian. There is no need, since APT upgrades in place. Fedora developers strongly recommend against this, though, and don’t test to make sure it will work fine.
I don’t know, it just seems terribly annoying to me. Don’t get me wrong, I guarantee I will try Fedora Core 3; I just really don’t like the release cycle at all.
My Win XP boots just fine after I installed FC2. I know there is a bug about that, but that was for the FC2 test release.
Has anyone tried installing this yet? What’s different from an updated FC2 install?
No, I haven’t tried it yet but here are the release notes:
http://download.fedora.redhat.com/pub/fedora/linux/core/test/2.90/i…
>> Well, before FC came out this was already a given: they will have 6-month release cycles. This is not Red Hat Enterprise Linux, it’s Fedora Core!
FC’s purpose is to test stuff before RH puts it into Enterprise Linux, so it’s obvious that movement/upgrading on FC will be a regular activity for many.
Thanks people for clearing that up.
First I heard about people having problems booting into Linux after they had installed XP, and then I heard the opposite. So I just thought the communities were blaming each other. =]
I will admit upfront that I am not a Linux guru. However, I recently installed Slackware 10 and did some basic memory footprint tests. In each case I booted up, fired off a terminal, and ran top. Fvwm2 fires up at about 80 M, Fluxbox at about 80 M, XFCE at about 100 M, GNOME at about 162 M, and KDE at about 178 M. These are not exact figures, but they are relatively close. Fedetx, you are telling me you can get GNOME to boot in 91 M?? And even more incredibly, that by building a custom kernel you can get it to boot GNOME in 65 M??? Can _ANYONE_ confirm this?… If so, I guess my days of using near-default configs are over. If I can wheedle those kinds of memory footprint savings, then I really need to get busy. I’m genuinely curious about this.
Peace,
The Unix memory model isn’t like that. Have you never heard of this thing called “shared memory”? Or buffers and cache?
If you want to continue to live in ignorance and think that opening 5 Firefox processes will fill up your entire RAM and swap, that’s your choice.
“just annoying burning more ISOs…”
Copy the kernel/initrd to /boot.
Update your grub/lilo.conf.
Boot with the “askmethod” parameter, then choose “Hard Drive” and give the path to the ISO image.
BTW, anaconda also provides HTTP, NFS, FTP, … support.
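For illustration, the grub.conf entry for the second step might look like this (the title and file names here are made up; use whatever names you gave the installer kernel and initrd when copying them to /boot):

```
title Fedora installer (boot ISO from disk)
    root (hd0,0)
    kernel /vmlinuz-installer askmethod
    initrd /initrd-installer.img
```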
Hi Linux users,
Since a lot of people are making a big argument about speed and boot time, why not give BSD a try?
I am quite happy with FreeBSD; you don’t have to be as concerned with that kind of stuff, GNOME or KDE or whatever.
A lightweight DE like Fluxbox/Xfce runs very smoothly on BSD, and its kernel stays small, with simple scripts; that makes you happy on a quite out-of-date box.
Otherwise, Slackware is also a good distro in the Linux world. Give it a try; it’s very similar to BSD.
Don’t be so cutting-edge when you don’t have a modern PC in hand.
With FreeBSD, it is easy to install just a minimal system and then add more programs later using ports. Save time and learn more.
Well, I just checked again yesterday, and after a normal boot and after logging into GNOME and starting System Monitor (a GTK app) I have 60 MB.
Remember, I turned off a lot of services and dumped a lot of kernel code, especially all network drivers (I use dial-up), IPv6 (not needed), SELinux, XFS, JFS, and other filesystems. I can send you my kernel .config file with the compilation options.