Amid the renovation of freedesktop.org's web site, David Zeuthen announced the release of HAL 0.1. HAL is an implementation of a hardware abstraction layer, as described in Havoc Pennington's paper. It comprises a shared library for use in applications, a daemon, a hotplug tool, command-line tools, and a set of stock device info files. Carlos Perelló Marín has also announced the design of a similar concept, but the two projects are expected to merge. More people are encouraged to join this innovative project. Elsewhere, GNOME's Seth Nickell is giving us a first taste of his effort to replace the init system.
I don't understand all the effort people put into creating new graphical interfaces. In my opinion the Linux CLI works just fine and does not require so many resources.
bsaremi
http://www.bitsel.com
> I don't understand all the effort people put into creating new graphical interfaces
freedesktop.org is NOT an effort to create a new DE. It is an effort to unite the *backends* of the existing DEs so they share the same protocols in their architectures. This way, you don't have to create bookmarks or MIME types or menus twice across your DEs; they will be recognized and used automatically, no matter what DE you are using! It is all about interoperability and integration.
> the Linux CLI works just fine and does not require so many resources.
Yeah, but you can’t see our nifty OSNews logo with lynx.
ok…
you're right…
Seems to be especially useful for people who change their DEs very often…
and you're right again… It would be a shame not being able to see your logo… By the way, my compliments on your site… it's very well designed…
Babak Saremi
> Seems to be especially useful for people who change their DEs very often
Indeed. However, it is also important if you are running apps that were built with one or another toolkit. For example, if you are on GNOME and you launch Konqueror, because you happen to like Konqueror, it would be nice to see your bookmarks from the main repository. This is the sort of integration and unification freedesktop.org is aiming for, NOT making GNOME and KDE look and feel the same.
How come I don't like this idea?
HAL is something that should sit between HARDWARE and KERNEL, i.e. an abstraction layer that the kernel uses to get information about the hardware. Most microkernel systems do this in a modern way, MorphOS for example:
Hardware <—-> HAL <—-> Kernel <—-> Application
That is the correct way to do it. Keeping the HAL as something on its own makes it easier to port the kernel to, e.g., another hardware architecture with different hardware inside it. What the freedesktop stuff tries to achieve is:
Hardware <—-> Kernel <—-> HAL <—-> Application
That is, the kernel boots, checks all the hardware on your system, boots into GNOME, and then there are these tools that secretly access kernel-related resources. I have only quickly read the link above, so don't blame me here, but reading 'there are tools accessing the kernel to get hardware information' (not an exact quote) gave me the idea that this is the case.
Anyway, regardless of what I think, this is plain wrong, especially on Linux. The idea is good; the implementation is not. I simply don't see any benefit to putting a HAL layer on top of an existing kernel which has already booted and initialized the hardware. I think everyone with a little knowledge of OS writing will agree here. This solution looks like a hack to make something work that should really be done completely differently, or at least on a different layer.
Btw: I am using QNX right now and writing this text with Voyager. QNX rocks.
Say what you want about Linux, but it’s always at the forefront of technology with bleeding edge ideas.
By your example, oGALAXYo, you tie GNOME to a specific OS, because it will need the Linux kernel in order to talk to HAL.
By freedesktop.org's implementation, HAL is kernel-independent (as long as it is ported, that is).
Plus, there is no reason why there shouldn't be two kinds of HALs, like this, to serve different purposes (I am not talking about duplication here, because Linus already wants to add real hotplug support for Linux kernel 3.0):
Hardware <—-> HAL-1 <—-> Kernel <—-> HAL-2 <—-> Application
The first HAL is low level, the second one is high level.
Eugenia hits the nail on the head with her comment.
This HAL is designed to reduce the work required by a DE to support different kernels and hardware. At the moment, a lot of porting must occur to have GNOME working on systems as similar as the *BSDs and Linux.
The Freedesktop HAL will cut out this work not just for Gnome but for all future DEs. This is a very good thing and part of the standardisation that needs to occur for FLOSS projects and developers to be able to work cooperatively and for FLOSS to be increasingly commercially competitive as a workstation option.
> By freedesktop.org's implementation, HAL is kernel-independent (as long as it is ported, that is).
Exactly what I understood. Thanks for confirming this. Well, I understand that GNOME should be hardware-independent, and I also understand the need to access the hardware comfortably under GNOME. But please don't get me wrong here. All this is overhead due to the implementation. I wholeheartedly agree with the idea, but I don't agree with the implementation. This is only a hack around some stuff to make things work on many platforms.
Look, we have already reached a point in GNOME where things are quite slow, e.g. people with limited hardware have a hard time using GNOME, as you once figured out yourself. Move a lot of files on your hard disk from place A to place B, and while this is going on, open GNOME-Terminal for example. It takes many seconds (20-30 seconds) to show up, due to loading all sorts of libraries, and these libraries in turn require other libs, and so on. If we put another step of complexity in between, then maybe it would be better to go all the way and write a NEW OS around GNOME.
An OS that has a HAL system, an OS that includes the GNOME-required VFS layer in the core, an OS that works on all hardware but is built around the needs and requirements of a desktop system.
Cool would be:
Hardware <–> Kernel (HAL, VFS, Framebuffer all for Desktop needs) <–> Desktop
What we have now is:
Hardware <–> Kernel (HAL) <–> (X11, VFS layer for GNOME, HAL and more stuff and wrappers for other platforms) <–> Desktop
And exactly this stuff is the pain. The more we put into the parenthesized entries, the more it affects speed and other things. Look at the BeOS, MorphOS or QNX solutions, where all these things work quite closely to and around the kernel. This explains the huge speed of the resulting desktop.
If the HAL sits between the kernel and applications, isn't it really a Kernel Abstraction Layer?
oGALAXYo is right…
Adding more layers has 3 problems:
1) Speed… (more system calls; we all know that more system calls mean less speed)
2) Dependency… (DE depends on X, which depends on glib, which depends on HAL, which depends on the kernel, which depends on the hardware)
3) We are assuming that each layer is independent of the others (i.e. upgrading HAL will not affect GNOME)
Who cares about a lot of syscalls? This is for hardware discovery and hotplugging, not for filling video RAM quickly or whatnot. How often do you add or remove a device? Who cares if it takes 0.01 seconds longer because we don't talk directly to the kernel. The other problem is that not all hardware is available through the kernel; gphoto2 devices for example, for which applications had to poll yet another library. The HAL system aims to unify all this, and device-centric applications can be developed far faster and more easily afterwards.
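Purely as an illustration of what "unify all this" could mean for an application author, here is a toy sketch in Python. Every name in it is invented; this is not the real HAL API, just the shape of the idea:

class Device:
    def __init__(self, udi, properties):
        self.udi = udi                  # unique device identifier
        self.properties = properties    # free-form key/value metadata

class DeviceRegistry:
    """One place to ask about devices, however they were discovered
    (kernel hotplug, gphoto2 polling, ...)."""
    def __init__(self):
        self._devices = {}

    def add(self, device):
        self._devices[device.udi] = device

    def find_by_capability(self, capability):
        return [d for d in self._devices.values()
                if capability in d.properties.get("capabilities", [])]

registry = DeviceRegistry()
registry.add(Device("/devices/camera0",
                    {"capabilities": ["camera"], "backend": "gphoto2"}))
registry.add(Device("/devices/storage0",
                    {"capabilities": ["storage"], "backend": "kernel-hotplug"}))

# The application asks one question instead of polling several libraries:
for dev in registry.find_by_capability("camera"):
    print(dev.udi, "via", dev.properties["backend"])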
“Yeah, but you can’t see our nifty OSNews logo with lynx. ”
W3m works fine. Seen it, done it.
I wonder if Havoc can address these claims about speed concerns.
I am already concerned that I am running distro after distro that is slower than win2k as a desktop.
I am talking about app launch and responsiveness, and no, I don't believe the 2.6 or 3.0 Linux kernel will solve these problems.
I agree with oGALAXYo if the concerns about speed are correct.
But you can see the logo with links in graphical mode, ever tried that?
The freedesktop.org HAL is primarily a method to unify setup of pluggable (or maybe even non-pluggable) devices. Once the setup has completed, HAL is basically out of the picture: You’ll use the same accessor libraries to do the actual work that you’ve always used. There is no additional layer during normal operation.
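A tiny sketch of that division of labor (the lookup function here is a made-up stand-in for the discovery step, and the device path is only illustrative): ask once where the device is, then do all real I/O through the ordinary OS interfaces:

import os

def discover_cdrom_node():
    # Made-up stand-in for the setup-time query; in reality the discovery
    # layer would tell us which device node to use.
    return "/dev/cdrom"

# Setup phase: one question to the discovery layer.
node = discover_cdrom_node()

# Normal operation: plain OS calls, no extra layer in the I/O path.
fd = os.open(node, os.O_RDONLY)
try:
    block = os.read(fd, 2048)   # read one CD-ROM sector directly
finally:
    os.close(fd)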
Now in response to oGALAXYo’s concerns: Sure, it would be nice if HAL functionality, working framebuffer + 3D acceleration (possibly using binary drivers), etc. could all be integrated into all kernels.
However, the freedesktop.org people live in the real world, not in some sort of academic dreamland. They actually get something done, and that’s obviously a Good Thing.
When in doubt, add a level of abstraction to the system. Or something like that.
> However, the freedesktop.org people live in the real world,
> not in some sort of academic dreamland. They actually get
> something done, and that’s obviously a Good Thing.
Is it? The sentence 'whoever provides the code leads the development' can be seriously WRONG. If we let everyone proceed that way and excuse everything that happens with such sentences, then we end up in the biggest mess ever. We should first sit down, talk, and weigh the real-life benefits against theoretical hackish solutions. I doubt that these people at freedesktop.org do everything right. They are humans like all of us, and they trap themselves in shit often enough.
A real-life solution shouldn't be LAME HACKS; if we add more LAME HACKS around and inside GNOME or any other desktop solution, we gain nothing. There is no need to add another layer on top of an already existing layer, and I bet even you can understand the simple explanations I provided in my earlier replies. Where will it all end if we simply sit back and assume that the people at freedesktop.org do everything right? Who judges whether the stuff they decide IS right, if not people outside freedesktop.org with real-life experience (e.g. 20 years and more) in these areas? If we sit back looking at the shit they sometimes put in a desktop, where will we end up in a few years? We add one technology after another to GNOME or KDE without any real-life benefit. We hack stuff on top of other existing solutions only to get done the stuff we like, because we can't do it differently, because the OS this desktop runs on only permits these capabilities from the desktop level. We sit here somehow paralyzed and let them proceed with all the bloat in the desktop, and we can't do anything about it. They excuse the need for it today, and we let them proceed; tomorrow they come up with yet another idea, which we criticize as well, and we let them proceed again.
What do you users want? A working desktop that runs smoothly and fast, loads up quickly, where the apps look consistent, and where you see good value for what you get? Or do you want a technological experiment into which ALL technological ideas have gone, where with every iterative desktop release more and more technological ideas go in? Where the application <-> library dependencies AND the library <-> library dependencies keep increasing?
Systems like MorphOS, QNX and BeOS (to name some) show us with all the bells and whistles how you can do a nice desktop system the right way. As someone else already stated, we will hit the limit really soon: GCC is nearly mature for Intel architectures, the Linux kernel will keep growing, other apps around the GNU system grow as well, GNOME will grow, and we need at least 1.6 GHz for normally responsive operation; soon we will require 2.6 GHz just to achieve the same speed feel we have today. Meanwhile, other systems like the ones named above require fewer resources and less CPU speed, yet still respond faster than GNOME. Double-click an icon in MorphOS on a G3/600MHz, for example, and it pops up before you have finished the double-click. Do the same on GNOME and you wait several seconds.
Maybe worth thinking about? Whaddya say? But getting deeper into this requires a lot more understanding.
I take it from his post that he hasn't read the paper this was based on, or pretty much any of the news. This is merely speculative keyboard commentary. I can do just as well…
HAL is uber-fast.
There, now I have presented as much fact as GALAXY has.
In other news, people always complain about speed. I always hear talk of the lowest common denominator. I also hear people on 2+ GHz machines saying "Oh, this is so slow I can't do anything…".
My thoughts on this are rather simple:
1. Processors are getting faster. This allows for better software design. Sure, it may make the software bigger, but it also makes the software better.
2. People on slow hardware should NOT expect to use bleeding edge stuff. Should the world suffer because people are still holding on to their 386’s as desktop machines?
3. Finally, if things run slow on your 2+ GHz machine, you don't have a clue. And you have all seen these posts. These are the guys who a) obviously know what they are doing (sarcasm), b) provide you with a complete rundown of their machine, which is usually pretty fast, and c) complain that everything runs so SLOW and it's impossible to get any work done.
My assumption is that 90% of the time they are either lying, morons, the type who expect Photoshop to open in 0 ms (making them unrealistic), or, worst of all, they have every service imaginable running, along with a few million apps and every single chat service, plus music and a movie playing, while using 20 different high-end eye-candy effects marked "will slow your machine down".
Having read the paper a while back, I can say HAL's goals are very good, and it will have negligible impact on speed. The benefits are much greater than any potential downside.
Looks like these guys are catching up with Dave Cutler's ideas, introduced into mainstream computing 12 years ago. A HAL and a Service Manager have been features of NT since the beginning.
.. I am wrong? OK! Maybe you are wrong as well!
What we need to do is look at what we have now and remove stuff (no, not remove even more apps, more prefs and more things we love to use). What I mean is: make libraries faster, remove libraries of which only 2 functions are used in the entire desktop, or move their functions into other libraries and then remove them. We need to remove libraries where possible. A library that duplicates functions, a library that wraps stuff, a library that only has 2 functions is a pain in the butt. Not that the 2 functions would hurt; no, the loading of these libraries does, especially under heavy load.
Has anyone ever done a real-life test of GNOME or KDE under heavy load? E.g. while copying big files across the hard disk. I wish someone would do this and then come back and talk. Even if you think that HAL (implemented on the wrong end) is hellishly fast, I must tell you that library loading during heavy load IS NOT fast. That is one of my arguments, besides the others I provided earlier.
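If someone wants to try it, a quick script is enough. A rough sketch (the app and the paths are placeholders; swap in whatever you want to measure):

import subprocess
import time

def time_startup(cmd):
    # Time how long a command takes to run to completion, in seconds.
    start = time.monotonic()
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
    return time.monotonic() - start

# Placeholder for the app you care about (e.g. something GNOME-based
# whose startup is dominated by library loading).
APP = ["ls", "/usr/lib"]

idle = time_startup(APP)          # baseline on an idle system

# Generate heavy disk I/O in the background (output path is a placeholder).
load = subprocess.Popen(
    ["dd", "if=/dev/zero", "of=/tmp/bigfile", "bs=1M", "count=2048"])
try:
    busy = time_startup(APP)      # same measurement under load
finally:
    load.wait()

print(f"idle: {idle:.2f}s   under I/O load: {busy:.2f}s")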
This is getting really boring.
You see, if you know everything oh so perfectly well, then
a) why don’t you tell the Freedesktop people about it. Apparently they’re so stupid that they could really need your help!
b) why don’t *you* do something? Why don’t *you* check out the source to one of those libraries you claim are so uberslow [1] and actually fix it instead of whining and generally getting on everybody’s nerve?
> Not that the 2 functions would hurt; no, the loading of these libraries does, especially under heavy load.
This might actually be valid criticism of the way dynamic library loading works on Linux-based systems. However, the solution is *not* axing libraries; that's something you could do in a top-down mandated environment, i.e. a proprietary operating system, but it will never work in an open source environment.
This doesn’t mean architectural improvements aren’t possible. For example, one could try to improve sharing of dynamic libraries between adress spaces/processes (i.e. basically what the kdeinit hack does, but on a system-wide level).
There are two ways I believe this can be done:
a) implement library loading in the kernel
b) implement library loading in a system-wide dynamic library daemon; whenever an application needs to load a library, it contacts that daemon, which loads the library (if it hasn't been loaded already), relocates its symbols, and provides a read-only mmap of the fully relocated library to the client application (a rough sketch of the shared-mapping idea follows below).
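As a minimal sketch of the building block option b) relies on: several processes can map one pre-relocated library image read-only and share the physical pages. This is only the mmap mechanism, not a working relocation daemon, and the path is invented for illustration:

import mmap
import os

# Stand-in for a pre-relocated library image that a daemon would prepare
# once and hand out to every client (the path is illustrative only).
path = "/tmp/fake-library-image"
with open(path, "wb") as f:
    f.write(b"\x90" * 4096)   # pretend this is relocated library code

fd = os.open(path, os.O_RDONLY)
try:
    # MAP_SHARED + PROT_READ: every process mapping this file sees the same
    # physical pages, so the "library" exists only once in memory.
    image = mmap.mmap(fd, 0, flags=mmap.MAP_SHARED, prot=mmap.PROT_READ)
    print("mapped", len(image), "bytes read-only")
    image.close()
finally:
    os.close(fd)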
> Has anyone ever done a real-life test of GNOME or KDE under heavy load? E.g. while copying big files across the hard disk.
Actually, yes, and interactive apps can slow to a crawl when copying big files across partitions. That's mainly a kernel/scheduler problem, though. You'll notice that advanced console apps are affected by latency/responsiveness issues in that case, too.
[1] Yes, apparently there *are* some libraries using horribly inefficient algorithms out there. But in most cases, those libraries are able to survive because there’s no alternative – and it’s generally better to have *something* that works (even if it’s slow) than to have nothing at all.
Well, if you don't believe that 2.6 and 3.0 will fix the problems, then you do not understand what a huge impact an O(1) scheduler has over an O(n) scheduler, or what the low-latency and realtime fixes add (the latter are tied to the number of spinlocks in the kernel, meaning the better the kernel can scale, the faster it will get).
Most of the latency issues you have are from bad algorithms in the device manager. 2.6 fixes many of them.
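For those wondering what the O(1) part buys you, here is a simplified sketch of the idea (not the actual kernel code): one queue per priority level plus a bitmap of non-empty levels, so picking the next task is a constant-time find-first-set-bit instead of a scan over every runnable task:

from collections import deque

NUM_PRIOS = 140   # the 2.6 scheduler uses 140 priority levels

class O1RunQueue:
    """Simplified picture of an O(1) runqueue: per-priority queues plus
    a bitmap of which priorities currently have runnable tasks."""
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_PRIOS)]
        self.bitmap = 0

    def enqueue(self, task, prio):
        self.queues[prio].append(task)
        self.bitmap |= 1 << prio

    def pick_next(self):
        if not self.bitmap:
            return None
        # Lowest set bit = highest priority; constant time regardless of
        # how many tasks are runnable (an O(n) scheduler scans them all).
        prio = (self.bitmap & -self.bitmap).bit_length() - 1
        task = self.queues[prio].popleft()
        if not self.queues[prio]:
            self.bitmap &= ~(1 << prio)
        return task

rq = O1RunQueue()
rq.enqueue("kswapd", 5)
rq.enqueue("gnome-terminal", 20)
print(rq.pick_next())   # "kswapd": lower number = higher priority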
I think it’s become apparent that the current init system has serious speed problems and is quite hackish and overly complicated, and that we need to do a rewrite. Seth’s answer seems to be _exactly_ what it should be, which is greatly refreshing. Move the onus of “daemonhood” onto the apps, which can implement it in machine code.
I cannot believe that moving things out of relatively slow bash scripts and into a lower-level, compiled language wouldn’t make quite a bit of difference in speed.
-Erwos
This is the greatest innovation of the freedesktop.org team and will surely shrink the accusations of them being GNOME-only. I think every DE profits from a cleaned-up init system.
>But you can see the logo with links in graphical mode, ever tried that?
Yes, I have. w3m too. I even have a screenshot of osnews running on all three of them. Remember, osnews runs great on these text mode browsers. Check our icon at the bottom of the page.
Speed?
GNOME 2.4 works just fine on my Celeron 733. And it works just fine on my friend's P2-400. With the latest kernel sources (development ones) it's as responsive as Win (2k/XP/whatever). I think it's actually better, but that is subjective.
Havoc Pennington is a clever guy. Most GNOME devs are clever guys. I think, oGalaxyo, you are being rather pretentious with your attitude. They would not mandate something stupid. As for speed/efficiency impacts… where are they? I'm pretty sure its impact on footprint and speed will not be noticeable at all.
WRT the init replacement: this has been a long time coming. A Linux system should get into the GUI in <20s, and the queued init system is one of the major reasons Linux systems take so long to boot into the login manager.
> I cannot believe that moving things out of relatively slow bash scripts and into a lower-level, compiled language wouldn't make quite a bit of difference in speed.

It's not bash or the scripts that are slow at all; it's the fact that each init script spawns a program that has to do some work, and that work takes time (not all of which is CPU time). Parallelizing startup scripts, and allowing them to run in the background after the login prompt/XDM, would provide a *huge* savings in waiting time. But so far nobody has come up with a bulletproof way to manage dependencies of startup scripts so that they can be parallelized and backgrounded.
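To illustrate the dependency problem, here is a rough sketch (my own toy service graph, not Seth's actual design) of dependency-ordered parallel startup: every service starts in parallel but waits only on its own prerequisites:

import threading
import time

# Hypothetical service graph: name -> list of services it needs first.
DEPS = {
    "syslog":  [],
    "network": ["syslog"],
    "sshd":    ["network"],
    "cups":    ["network"],
    "xdm":     ["syslog"],
}

done = {name: threading.Event() for name in DEPS}

def start_service(name):
    for dep in DEPS[name]:
        done[dep].wait()            # block only on *our* prerequisites
    time.sleep(0.1)                 # stand-in for the real startup work
    print("started", name)
    done[name].set()

threads = [threading.Thread(target=start_service, args=(n,)) for n in DEPS]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Independent services (sshd, cups, xdm) overlap instead of running
# one after another as a queued SysV init would.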
> I think, oGalaxyo, you are being rather pretentious with your attitude.
He just doesn’t like Havoc, so he is smacking his effort at every chance he can.
First, let’s get one thing out of the way. Stop using the word “modern” to push your own particular view of how things should be. Microkernels are not “modern.” They are simply one way of organizing an OS.
Second, let the developers worry about speed concerns. There seems to be a misconception that more layers = slower speed. That's wholly false. More layers are only slower if they are in a speed-critical path. Remember, 10% of the code takes up 90% of execution time. Stuff that isn't in the hot path (like device detection!) doesn't need to be blindingly fast. Also, remember that most of the layers you see in Linux GUIs (kernel, X11, toolkit, etc.) are still present in MacOS X or Windows or even BeOS. You just don't notice them because they are opaque, and not talked about separately in the marketing literature. If you read the tech docs, you'll find them all in there. Take a Windows vs Linux example:
HAL -> Not explicit in Linux, but implicit in each subsystem.
Executive -> Kernel
Win32 Server (csrss.exe) -> No equivalent in Linux.
=========== kernelspace/userspace transition =============
X Server -> No equivalent in Windows.
GDI client library (gdi32) -> xlib
Common controls (comdlg32, commctrl) -> Qt or GTK.
Libraries like this one, along with FontConfig and Cairo, don't layer on top of others. They sit off to the side, in that calls to other layers don't go "through" them. It's like the layered graphics APIs in OS X: OpenGL doesn't go "through" QuickDraw any more than QuickDraw goes "through" OpenGL.
He hangs out in #gnome on GIMPnet and trolls. He also happens to think that GNOME should follow his vision, rather than the GNOME devs'.
All I have to say to him is: Look here, buddy. You don't like it? Don't use it. SystemServices is just another choice. HAL is another choice. YOU WON'T BE FORCED TO USE THEM! Fool. It's trolls like you that piss me off and rob me of much OSNews.com joy.
Almost every post of yours can be reported as ‘trolling’.
Be thankful for your shared libraries. In Windows, people just statically link things hoping to avoid DLL hell. Seriously, though, your comments have no merit. If there are libraries with only two functions, then I'm in favor of axing them. Show me one! Each library carries its weight, and one thing the OSS world has figured out is that libraries exist so you don't have to roll your own, but can use the code that already exists. Go find a Linux machine and do ldd <gui-app-name> on one of the binaries. If you find any libraries you think are extraneous, post them here and I'll tell you why they're not.
Also, as for usage under heavy load: I use a Linux machine as my primary desktop. It handles beautifully under high load. Right now, I'm copying a 10GB dir of MP3s and compiling KOffice in the background. At this point, XP would be begging for mercy. GUI responsiveness hasn't suffered at all, and the only thing you can notice is that apps take a little longer to load because they have to wait for pending I/O to complete. And this is with a (patched) 2.4 kernel, not even a 2.6 kernel. From my experience with 2.6, the new I/O scheduler greatly alleviates the second issue.
It is why we don't have cool shadows or transparent menus in the main source trees. GNOME developers tend to only include things that are implemented in a sane and non-hackish way these days. To that end, Havoc had written a paper defending the technical aspects of the HAL before implementation was even talked about. Now the implementation phase has begun. The next phase is acceptance into GNOME and other DEs. If it isn't up to par, if it becomes a performance bottleneck, it won't get accepted. Numerous ideas are going through this route: D-BUS, Storage, the replacement init system, etc. Some may never see the light of day, while others will become standards. I don't understand why people reject ideas outright. Comments like "I'll stick to the CLI" really have no bearing. Who cares; these programs were not written for people who love the CLI. They will have no impact on people who use QNX, MorphOS, etc. In fact, they have no impact on people who don't like them, because they can choose not to use them. That is the advantage of the layered approach. This is the advantage of Open Source.
—
J5
As much as I dislike Havoc's view of user interfaces, I believe he is correct this time. oGALAXYo has criticized this proposal as being another unnecessary layer piled onto other ones. In fact, what Havoc wants here is a kernel abstraction layer to provide a way for the DE to bypass Xlib through its own kernel service. This will (if done right) improve speed and make KDE and GNOME easier to port to other architectures.
"1. Processors are getting faster. This allows for better software design. Sure, it may make the software bigger, but it also makes the software better."
Nope, you're wrong. There are already enough developers relying on Moore's law to cover up sloppy, poorly designed, bloated, bug-ridden (more code = more bugs) code as it is. Having more processor resources available does _not_ automatically lead to better software design or better applications. In fact, the opposite often appears to be true.
I think there is a misunderstanding about the purpose of this HAL project.
As far as I understood from following the discussion on the mailing list, HAL is for discovering devices (and device state changes), checking their capabilities, and doing simple commands (like locking a CD drive bay or requesting an unmount).
Any heavy work will be done directly through the appropriate facilities (libraries, direct device access, etc.)
Of course, most of it could be implemented at the kernel level, and maybe the Linux implementation will do that someday.
But one has to define it as an extra layer, or one loses portability of DEs to other platforms, like *BSD, commercial Unix, etc.
Nobody at freedesktop.org will sacrifice achievable portability just because some single-platform hack would be a tiny bit faster.
>"Be thankful for your shared libraries. In Windows, people just statically link things hoping to avoid DLL hell."
Oh please, do you really believe this?
>"Also, as for usage under heavy load: I use a Linux machine as my primary desktop. It handles beautifully under high load. Right now, I'm copying a 10GB dir of MP3s and compiling KOffice in the background. At this point, XP would be begging for mercy. GUI responsiveness hasn't suffered at all, and the only thing you can notice is that apps take a little longer to load because they have to wait for pending I/O to complete. And this is with a (patched) 2.4 kernel, not even a 2.6 kernel. From my experience with 2.6, the new I/O scheduler greatly alleviates the second issue."
Your usually educated trolling is better than this.
1) No one has 10GB of MP3s, and even if you did, why would you need to copy them to another dir at the same time you're compiling KOffice and browsing the web and bragging about it? It just makes no sense.
2)Linux doesn’t increase your system memory bandwith, doesn’t makes your HD spin faster and doesn’t make your processor faster.
> He hangs out in #gnome on GIMPnet and trolls…
I think you are being quite unfair here. And, to sum it up, your definition of trolling sucks as much as your entire comment. I do hang out in #gnome on GIMPnet, and NOT just since yesterday; I have been there for more than 3.5 years. What you call 'trolling', because the evil, evil galaxy doesn't agree with something, is simply having one's own opinion about something. I'm not blindly following every step they take. I think that in all these years I have made good contributions around GNOME, be it as an active member of a project, through contributions, or through my own projects that are meant to help you people enjoy this desktop. But it's nothing compared to the contributions that others have made.
I have my own opinions, and I raise them in the hope that someone may realize that I may be just as right as the people at freedesktop, whom many others in here claim are ALWAYS right, regardless of whether they jump off the balcony to leave the house or use the door, which decades of experience have proven to be the better choice.
I'm not criticising the freedesktop people for not being clever; I am only giving constructive comments on a news forum about why I (as an individual person) think that this 'solution' is a bad thing. I'm not going to them and chopping their heads off just because they have their own ideas.
I think that many of you readers here (if capable, of course) need to calm down a lot and stop calling people 'trolls' just for having their own opinion, for god's sake. I'm not coming here to insult you people; I'm coming here to have a normal conversation. If you call everyone a 'troll' who does not share the same opinion as Havoc Pennington or Seth Nickell, then we will soon have a serious problem in this community: the problem of egoism and ignorance of other people's opinions. In case none of you have realized it already, this is a general problem we in GNOME have trapped ourselves in; people FEAR raising their opinions in public or on mailing lists because they get declared 'trolls', and in the end those with strange ideas will always be the winners.
If you people think that they (the core devs) are doing everything the right way, then why don't they just continue hacking GNOME on their own and stop bothering people to contribute to their project, if contributors' only role is to play the 'milk cow' to be sucked dry? Like, 'hey, thanks for your patch and bugfix, now go away'. GNOME is a community project, based upon teamwork and respect for everyone. Sadly, none of you people realize this. It entertains me to get labeled a 'troll' by people who have just barely managed to set up and run GNOME, but haven't done anything else for it so far.
Someone on GnomeDesktop just posted this link: the conversation between the kernel hacker and that guy re-inventing this HAL HACK. Now look at what Greg KH replies to David Zeuthen the whole time; scroll somewhere into the first 1/3 of the conversation.
http://sourceforge.net/mailarchive/forum.php?thread_id=3123686&foru…
Your usually educated trolling is better than this.
>>>>>>>>
What the hell is “educated trolling?” Either I’m right or I’m wrong.
No one has 10GB of MP3s
>>>>>>>>
I do. That’s why I got a 15GB iPod, so I would have some room to grow. 10GB isn’t a whole lot. Ripped at high quality, that’s just 100 CDs. When you’re into music (and have similarly minded family members) you can accumulate that many in just a few years. And that’s not counting all the J-Pop I download of KaZaA because I’m too ashamed to buy it at the store
and even if you did, why would you need to copy them to another dir at the same time you're compiling KOffice and browsing the web and bragging about it? It just makes no sense:
>>>>>>>>>>>
Um, because oGaLaXyO asked how Linux desktops performed under heavy load, especially while copying large directories around?
2)Linux doesn’t increase your system memory bandwith, doesn’t makes your HD spin faster and doesn’t make your processor faster.
>>>>>>>>>>
No it doesn’t. The compile itself is a little faster, because ReiserFS performs better than NTFS for the small files that make up a source tree, but the copy is disk bandwidth limited and performs similarly in both OSs. *However* how the system feels under these conditions is incredibly OS dependent. You need an OS that is smart about scheduling processes and disk I/Os to get good feel under heavy load.
You (and Greg KH) are missing the point. Yes, this stuff might work fine in Linux right now (in fact, I know it does: hotplug auto-detects my iPod when I plug it in), but this is one level higher than that. This is a standardized abstraction interface that you can use for all DEs on any platform. This is a choice quote from Greg KH:
"Yes, I know this only works on Linux, and you want to be 'cross-platform'. Might I suggest you get the other OSs that you want to support to also support what Linux already does, instead of trying to duplicate work?"
Between GNOME and KDE, you have a "free desktop" running on Linux, Windows, OS X, Solaris, OS/2, IRIX, AIX, OpenBSD, NetBSD, and FreeBSD, among others. Do you realize what sort of herculean feat would be involved in getting all these OSs to follow the Linux hotplug model?! The freedesktop.org standards are like X: OS-agnostic. This library fits in perfectly with this model.
Look, I have an XP 1600 here, 256MB RAM, and an IBM 15GB hard disk. My entire system is compiled from source, which guarantees a decent performance boost over the binaries some distros provide, by using the correct opcodes suited to the CPU rather than limiting them to the i386 or i486 opcodes that some distros compile their stuff with.
This is minimal per program, but overall (measured across the whole system) it gives a huge performance boost, and this with standard -O2 and the athlon-xp arch.
Furthermore, my hardware is in good shape: I don't encounter any IRQ conflicts, and my hard disk is operating with 32-bit DMA access at UDMA 66. I am running my entire system on 2.6.0-test6 as of now, using XFS from SGI. I think this system is quite cool. Probably not the best, but comparatively performant.
When you load up GNOME through GDM, at that moment only one main task is happening (symbolically speaking): the kernel boots, starts the init process, the init process loads GDM and X, you log in, and you are presented with GNOME.
When you then load one app and another app, you perform single tasks as a user. You run an application, you use it, you run another one, etc. During all this time the CPU is mostly idle, and the hard disk operates normally, syncing its cache to the device every now and then.
I must admit that I was highly impressed by the huge changes that 2.5.x (2.6.x) brought me after switching from 2.4.x some months ago: the swap storms disappeared, the system became more responsive, and so on. But hardware is limited; there is a point where you can't get any more out of it. Especially not the hardware one has at home, and I doubt that people go out every day and upgrade their systems. It's a dumb lie to say that if you want to use GNOME today, you should buy the newest hardware you can get. Not everyone finds money lying in the street to do this.
Now the limitations:
Say you copy (not move; moving only changes the node entries in the FS bitmap) a huge amount of data from location A to location B, and do normal work in GNOME, e.g. you want to listen to sounds or open a GNOME-Terminal. Then what happens? We know the new Linux kernel is performant and built to manage resources well, but as soon as it comes up against the physical limitations of the hardware, things start to cause problems. Copying files means copying the physical contents of sector XYZ to another location on the hard disk, which we sometimes hear as scratching. Now the hard disk is busy scratching the data from location A to location B; you turn back to your GNOME desktop (the scratching is still happening due to the huge data transfer) and start some GNOME applications. The app starts, ld.so kicks in trying to load all the libraries required by that application, and under GNOME it's not uncommon for an app to require over 50-70 library dependencies. Simply run 'ldd gnome-terminal', for example: it spits out what gnome-terminal alone requires, and that is only the application-to-library dependencies; the library-to-library dependencies are not even counted. Say you also have a swap partition on the same hardware and run out of memory: then you scratch even more, GNOME becomes even less responsive, applications may crash and re-spawn every now and then, and you end up with the hard disk scratching permanently because it has to keep moving its heads across the platters.
Of course you will answer that in a normal world everyone has a 7200rpm hard disk (preferably RAID-based), no swap space, 2GB of physical RAM, and a 3GHz Pentium 4. Dream on; this is most likely not the case.
Now, from reading this article and from knowing some things about GNOME myself: they have a lot of duplicate stuff just to stay operating-system independent, e.g. wrappers for functions that may be missing on BSD, some that may be missing on Linux, Solaris or Darwin (I don't know the details yet).
These duplicated things, and the dependency on ancient technology, are what cause the drastic speed losses. What we have now is a Linux kernel which does hardware initialisation and provides its devices through devfs, udev or normal nodes (the old way); you can't control the hardware through a GUI system like GNOME because Linux wasn't designed that way, and they used XFree86 to build the desktop on top of because there wasn't much alternative when they started years back. What do we have now? A kernel that doesn't offer the 'hardware just works' paradigm, and an XFree86 system which is horribly bloated.
Now the GNOME people are trying to fix these problems by a) forking XFree86, b) writing wrappers around the libraries so they run on all systems, c) adding layers upon layers on top of existing solutions, e.g. replacing init, adding HAL in a place where it doesn't belong. All of this is a signal (with respect to their authors) that something is badly wrong: limitations they try to work around with bad solutions. Adding more complexity to libraries, development tools and such gains nothing, and I ask myself whether it wouldn't be better for GNOME to write a kernel around their desktop that provides them exactly this:
a) staying platform-independent (i.e. it works on all hardware),
b) implementing their needs at the bottom layer of the kernel, which gains total speed by being implemented right,
c) using a 2D/3D-accelerated framebuffer for graphics to do their stuff.
This would push a lot of stuff towards the kernel level and would make a few libraries disappear entirely, or make some ideas be done differently. And of course this is only a theoretical thing that probably will not happen (little side note: there was a GNOME-OS mailing list some time back).
Right now we can expect that these hacks will make it into GNOME or KDE, and I want to let you know that Linux and Open Source have changed a great deal. These ideas are HUGE changes, not just to the desktop but to the entire concept and philosophy of Linux itself. There is not much choice these days either, because (for people who are not blinded, and those using their brains) everything has settled between GNOME and KDE; there is no room for choices anymore, and all that remains is talking and defending ideals and ideas. While I do agree with being able to control hardware from the desktop on the one hand, I also see the reality that these solutions are not really sane. How many libraries and duplicate new layers do we actually need before we can use a nice desktop? Right now the number of libraries is overwhelming and overkill. But I don't expect that anyone here will understand these things. They only see fancy icons and pray from the freedesktop prayer book that everything is right.
The reviewer writes:
“I’d probably still take the 1280×854, because I don’t really need everything to be teeny tiny.”
I always cringe when I read this. Any good OS should be able to exploit any current-resolution screen plus any forecast improvements. Currently the best laptop resolution is 1920×1200 (WUXGA) at about 150 dpi, and improvement won't stop until it hits at least 300 dpi. Ultimately I want that 9.2-megapixel laptop to go with the 10-megapixel camera that I will have by that time.
Are you sure that the OS can't be configured to make every single object (text, icons, checkboxes, etc.) appear normal size on screens up to 300 dpi? Windows XP isn't too bad in its configurability; I'd be pretty disappointed if MacOS was worse. Red Hat does a pretty good job too.
Get the highest resolution you can afford; there is no disadvantage other than price (and the occasional quality-control problem: Dell WUXGA screens were really messed up for a while, it seemed).
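As a sanity check on those dpi figures, assuming a 15.4-inch diagonal panel (the panel size is my assumption, nothing more):

import math

def dpi(width_px, height_px, diagonal_in):
    # Pixels per inch along the diagonal of a panel.
    return math.hypot(width_px, height_px) / diagonal_in

# 1920x1200 on an assumed 15.4" panel: about 147 dpi, i.e. "about 150 dpi".
print(round(dpi(1920, 1200, 15.4)))   # -> 147

# Doubling that to roughly 300 dpi at the same size means 3840x2400,
# which is the 9.2-megapixel figure above.
print(round(dpi(3840, 2400, 15.4)))   # -> 294
print(3840 * 2400 / 1e6)              # -> 9.216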
Dara
There are some advantages to a tightly integrated kernel and desktop. They are *not* what most people think they are. It's not the number of layers that affects the performance, but the inability to make hacks that make the system *seem* fast. Take BeOS for example. BeOS was never very fast. For most things (compiling, untarring archives, etc.) it was actually quite slow. However, the BeOS kernel was very well matched to the BeOS GUI. If there was a limitation in one component, a less general solution in another could make the problem go away. WinXP provides a good example: in XP, when an application plays a sound, it gets a temporary priority boost. That's not a very general solution, but it works. Now, a correct solution is much harder to design, but is fully general. Linux must be built on these fully general solutions, because the kernel and the DE projects aren't tied together so closely. That's something that you can't avoid. To Linux, GNOME is just another C program, and to GNOME, Linux is just another UNIX. That's the way it should be; that's proper program design.
PS> On the performance issue: the only place all this layering is having an effect is application startup speed. The correct solution is not the cop-out (reducing the number of libraries, and thus developer convenience) but the smart one: optimizing library loading. It would be a big boost if Linux supported the kind of optimized program loading that debuted in Windows XP, where the kernel tracks the pages needed in the first X seconds of an app's execution and preloads them. That way, all the libraries would be loaded in one go. Also, prelinking will go a long way toward fixing startup-time issues.
PS2> Which libraries are you complaining about? I don't use GNOME, but I see a list of 41 libraries linked into Konqueror, and every single one is important.
What the hell is “educated trolling?” Either I’m right or I’m wrong.
You're usually right about most stuff, and I actually like reading your posts a lot (you and rajanr are great posters), but when it's something about XFree or anything that involves responsiveness, you tend to bend the truth. This time, however, you were pretty obvious: compiling KOffice while browsing the web while uploading 10GB of MP3s to your iPod, and "GUI responsiveness hasn't suffered at all"??? What's next? Do you also play Quake 2 while doing all that?
Well, if you have something specific, call me on it. However, I really was doing the copy and the compile at the time. Responsiveness didn't suffer, other than the fact that apps took longer to start, which I mentioned. Opening menus, inputting text, scrolling text, moving the mouse, resizing windows, browsing websites: anything that doesn't touch the hard drive was fine. Why shouldn't it be? All of those things are CPU-bound, not I/O-bound. The compile process fluctuates between 20 and 80% CPU usage, and (as it should) the copy process takes almost zero CPU. I tried this little experiment again, and this time started glxgears. As expected, I got about 1400 fps, or about half my usual speed. With glxgears running, the CPU usage is 100%, and stuff like moving and resizing windows starts to lag, but it's not noticeable unless you go looking for it.
This is a heavily patched 2.4 kernel in Gentoo 1.4 with a 2GHz P4 and 640MB of RAM. Try it yourself if you don't believe me.
Now look at what Greg KH replies to David Zeuthen the whole time; scroll somewhere into the first 1/3 of the conversation.
Actually, if you read the whole thread, you will notice that Greg likes the idea after Havoc and David clarify some details.
And David really appreciates the capabilities of the new kernel, as they make the Linux implementation of HAL a lot easier, avoiding unnecessary duplication.