The Big freedesktop.org Interview

Today we are very happy to publish an in-depth Q&A with major freedesktop.org members: the founder Havoc Pennington (also of Debian, Gnome and Red Hat fame), Waldo Bastian (of SuSE & KDE fame), Keith Packard and Jim Gettys (of X/XFree86/fontconfig/w3c fame) and David Zeuthen, a new member who’s taking over the ambitious HAL project. In the article we discuss general freedesktop.org goals, status and issues, the role of KDE/Qt on the road to interoperability with Gnome/GTK+, HAL (with new screenshots), the new X Server aiming to replace XFree86, and we even have an exclusive preliminary screenshot of a version of Mac OS X’s Exposé window management feature running on this new X Server! This is one article not to be missed if you are into the Unix/Linux desktop!

Rayiner Hashem: In your presentation at Nove Hrady, you point out that drag and drop still doesn’t work as it should, mainly because of poor implementations. Are there plans for a drag-and-drop library to ease the implementation of XDND functionality?


Havoc Pennington: The issue isn’t poor implementation in the libraries; it’s simpler than that. When you add drag and drop to an application, you have a list of types that you support dragging or dropping, such as “text/plain”. Applications simply don’t agree on what these types are.


So we need a registry of types documenting the type name and the format of the data transferred under that name. That’s it.
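To make the type-list side of this concrete, here is roughly what registering a drop target looks like in GTK+ 2. The widget and handler below are illustrative; the point is the hard-coded strings in the GtkTargetEntry table, which are exactly the names a shared registry would have to pin down:

```c
#include <gtk/gtk.h>

/* The types this widget accepts on drop. The strings are the part
 * that applications currently fail to agree on. */
static GtkTargetEntry drop_types[] = {
    { "text/plain",    0, 0 },
    { "text/uri-list", 0, 1 },
};

static void
on_drag_data_received(GtkWidget *widget, GdkDragContext *context,
                      gint x, gint y, GtkSelectionData *data,
                      guint info, guint time, gpointer user_data)
{
    /* 'info' says which entry in drop_types matched. */
    g_print("drop of type %u, %d bytes\n", info, data->length);
    gtk_drag_finish(context, TRUE, FALSE, time);
}

void
make_drop_target(GtkWidget *widget)
{
    gtk_drag_dest_set(widget, GTK_DEST_DEFAULT_ALL,
                      drop_types, G_N_ELEMENTS(drop_types),
                      GDK_ACTION_COPY);
    g_signal_connect(widget, "drag-data-received",
                     G_CALLBACK(on_drag_data_received), NULL);
}
```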


The starting point is to go through GNOME, KDE, Mozilla, OpenOffice.org, etc. source code and document what types are already used.


The other issue requires even less explanation: application authors don’t support DND in enough places.


Rayiner Hashem: Most of the examples listed in your Nove Hrady presentation were desktop level. Yet, you mentioned GTK+ 3 and Qt 4 as well. Do you think more
interoperation at the toolkit level is necessary? What form would this interoperation take?


Havoc Pennington: I don’t really think of freedesktop.org as an interoperability effort anymore. Rather, it’s an effort to build a base desktop platform that desktops can build on.


Many of the things on freedesktop.org would be implemented in or used by the toolkit: icon themes, XEMBED, X server, Cairo, startup notification, and so forth.


Rayiner Hashem: From your experience with GTK+, what do you think are some of the properties that make it hard to write fast applications for X11? What would
you like to see the server do to make it easier to write fast client applications?


Havoc Pennington: Talking only about speed (fast vs. slow) won’t lead to an understanding of the problem. Graphics will look bad if you have flicker, tearing, OR slowness. Most users will perceive all of those problems as “slowness.”


Eliminating the round trip to clients to handle expose events will probably be a huge improvement in terms of both flicker and speed. The proposed Composite extension also allows double buffering the entire screen, which should let us fix virtually all flicker and tearing
issues.
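For the curious, asking the server to redirect window contents into off-screen buffers takes only a few lines of client code against the proposed Composite extension. A minimal sketch, with the caveat that the extension was still experimental at the time and its details were subject to change:

```c
#include <stdio.h>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>

int
main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int event_base, error_base;

    if (dpy == NULL ||
        !XCompositeQueryExtension(dpy, &event_base, &error_base)) {
        fprintf(stderr, "Composite extension not available\n");
        return 1;
    }

    /* Redirect every top-level window into an off-screen buffer.
     * In Automatic mode the server still composites the buffers to
     * the screen itself, which is what gives the whole-screen
     * double buffering described above. */
    XCompositeRedirectSubwindows(dpy, DefaultRootWindow(dpy),
                                 CompositeRedirectAutomatic);
    XSync(dpy, False);
    pause();  /* redirection lasts as long as this client lives */
    return 0;
}
```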


Some clients right now do things that are just stupid; for example, allocating huge pixmaps (image buffers) and keeping them around. Improved profiling tools should help track down and fix these mistakes.


Eugenia Loli-Queru: Is freedesktop.org working towards a package management standard? While RPM and DEBs are well known, what is your opinion on autopackage.org?


Havoc Pennington: I don’t really understand the motivation for autopackage. At their core, RPM and DEB are tarball-like archives with some metadata. You can always just ignore the metadata, or add additional or different metadata.


Take file dependencies, for example: if you don’t want your RPM package to have file dependencies, you don’t have to include any.


I would tend to focus more on the question of which metadata we should have, and how it should be used by the installer UI.


autopackage tries to solve the problem that distributions use different packaging systems by creating an additional packaging system and using it in addition to the native one. However, you could just as easily pick one of the current systems (RPM, etc.) and use it on any distribution. RPM works fine on Solaris for example. I don’t see how autopackage uniquely enables portability.


In short, to me the issues with software installation are not related to the on-disk format for archiving the binaries and metadata. I think autopackage could achieve much more by building stuff _around_ RPM and/or DEB rather than reinventing the archive wheel.


I haven’t looked at autopackage in detail though, and I could be totally wrong.


Eugenia Loli-Queru: How do you feel about freedesktop.org becoming an “umbrella” project for all projects that require communication (e.g. if X requires a kernel extension, freedesktop.org makes sure that the X group is heard by the kernel group, and manages the implementation)?


Havoc Pennington: Ultimately freedesktop.org can’t make sure of anything; it’s not an enforcement agency. What it can do is provide a forum that’s well-known
where people can go and find the right developers to talk to about a particular issue.


Implementation will really always come from the work and leadership of the developers who put in hours to make things happen.


Eugenia Loli-Queru: How do you grade the support of commercial entities towards freedesktop.org? Are IBM, Novell, Red Hat and other big Linux players helping out the cause?


Havoc Pennington: Individual developers from all those companies are involved, but there’s no framework for corporations to get involved as corporations.
I’m happy overall that the right people are involved. But of course I’d always like to see more developers added to the Linux desktop effort.


Eugenia Loli-Queru: Do the plans of freedesktop.org only cover interoperation between DEs, or is innovation part of the plan as well? For example, would freedesktop.org welcome Seth Nickell’s Storage or ‘System Services’ projects, which are pretty “out of the ordinary” kinds of projects?


Havoc Pennington: I’d like to see more work originate at freedesktop.org, and certainly we’d be willing to host Seth’s work. Ultimately, though, any new innovation has to be adopted by the desktops such as GNOME and KDE, and the distributions, to become a de facto reality. freedesktop.org may be the forum where those groups meet to agree on what to do, but freedesktop.org doesn’t have a “mind of its own” so much.


Eugenia Loli-Queru: In your opinion, which is the hardest step to take in the road ahead for full interoperability between DEs? How far are we from the realization of
this step?


Havoc Pennington: I think the “URI namespace” or “virtual file system” issue is the ugliest problem right now. It bleeds into other things, such as MIME
associations and WinFS-like functionality. It’s technically very challenging to resolve this issue, and the impact of leaving it unresolved is fairly high.


Eugenia Loli-Queru: On Mac OS X, users who require extra accessibility can listen to text from pretty much any application via text-to-speech, as it is supported at the toolkit level. Are there any plans for creating a unified way in which all applications (Qt or GTK+) would be able to offer this functionality from a common library? What would be the best way to go about it? What accessibility projects would you like to see produced at freedesktop.org?


Havoc Pennington: This is already supported with ATK and the rest of the GNOME accessibility implementation; you can text-to-speech any text displayed via GTK+ today. I believe there’s a plan for how to integrate Qt and Java into the same framework, but I’m not sure what the latest details are. This is looking like an interoperability success already, as everyone does appear to be using the same framework.
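As a rough illustration of what “supported at the toolkit level” means here: every GTK+ widget exposes an AtkObject that assistive technologies such as screen readers query, so an application mostly just has to fill in sensible metadata. A minimal sketch (the strings are of course illustrative):

```c
#include <gtk/gtk.h>

/* Give a widget an accessible name and description so a screen
 * reader has something sensible to speak. Every GTK+ widget hands
 * back an AtkObject through the same call. */
void
describe_for_speech(GtkWidget *widget)
{
    AtkObject *acc = gtk_widget_get_accessible(widget);

    atk_object_set_name(acc, "Save document");
    atk_object_set_description(acc,
        "Saves the current document to disk");
}
```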


Eugenia Loli-Queru: I haven’t found an obvious way to get Abiword, Gaim, Epiphany (granted, with the Mozilla toolkit, but still one of the apps that begs for such an accessibility feature) or Gedit to read any text… How is this done, then? Is it a compilation/link option? If so, the problem is not really solved if it is not transparent to the user and does not get enabled automatically after a compilation.


Havoc Pennington: I haven’t ever tried it out, but I’ve seen the a11y guys demo it. The toolkit functionality is definitely there, in any case. I think you go to Preferences -> Assistive Technology Support and check the screenreader box, but I don’t know if it works out of the box on any distributions yet. It’s a new feature. (Editor’s note: “screenreader” on Fedora is greyed out, while on the latest Slackware it can be selected but is later deemed “unavailable”, so it doesn’t work out of the box for most distros yet.)

Rayiner Hashem: Computer graphics systems have changed a lot since X11 was designed. In particular, more functionality has moved from the CPU to the graphics
processor, and the performance characteristics of the system have changed drastically because of that. How do you think the current protocol has coped with these changes?


Keith Packard: X has been targeted at systems with high-performance graphics processors for a long time. SGI was one of the first members of the MIT X Consortium and shipped X11 on machines of that era (1988). Those machines looked a lot like today’s PCs — fast processors, faster graphics chips and a relatively slow interconnect. The streaming nature of the X protocol provides for easy optimizations that decouple graphics engine execution from protocol decoding.


And, as a window system, X has done remarkably well; the open source nature of the project permitted some friendly competition during early X11 development that improved the performance of basic windowing operations (moving, resizing, creating, etc.) so that they were more limited by the graphics processor and less by the CPU. As performance has shifted towards faster graphics processors, this has allowed the overall system performance to scale along with them.


Where X has not done nearly as well is in following the lead of application
developers. When just getting pixels on the screen was a major endeavor, X
offered a reasonable match for application expectations. But, with machine
performance now permitting serious eye-candy, the window system has not
expanded to link application requirements with graphics card capabilities.
This has left X looking dated and shabby as applications either restrict
themselves to the capabilities of the core protocol or work around
these limitations by performing more and more rendering with the CPU in the
application’s address space.


Extending the core protocol with new rendering systems (like OpenGL and Render) allows applications to connect to the vast performance offered by the graphics card. The trick now will be to make them both pervasive (especially OpenGL) and hardware accelerated (or at least to optimize the software implementation).


Rayiner Hashem: Jim Gettys mentioned in one of your presentations that a major change from W to X was a switch from structured to immediate mode graphics. However, the recent push towards vector graphics seems to indicate a return of structured graphics systems. DisplayPDF and XAML, in particular, seem particularly well-suited to a structured API. Do you see the X protocol evolving (either directly or through extensions) to better support structured graphics?


Keith Packard: So far, immediate mode graphics seem to provide the performance and
capabilities necessary for modern graphics. We’ve already been through a
structured-vs-immediate graphics war in X when PHIGS lost out to OpenGL.
That taught us all some important lessons and we’ll have to see some
compelling evidence to counter those painful scars. Immediate graphics
are always going to be needed by applications which don’t fit the structured
model well, so the key is to make sure those are fast enough to avoid the
need to introduce a huge new pile of mechanism just for a few applications
which might run marginally faster.


Rayiner Hashem: What impact does the compositing abilities of the new X server have on memory usage? Are there any plans to implement a compression mechanism for idle window buffers to reduce the requirements?


Keith Packard: Oh, it’s pretty harsh. Every top level window has its complete contents
stored within the server while mapped, plus there are additional temporary
buffers needed to double-buffer screen updates.


If memory does become an issue, there are several possible directions to
explore:


    + Limit saved window contents to those within the screen boundary; this will avoid huge memory usage for unusually large windows.


    + Discard idle window buffers, reallocating them when needed and causing some display artifacts. Note that ‘idle’ doesn’t just mean ‘not being drawn to’, as overlying translucent effects require saved window contents to repaint them, so the number of truly idle windows in the system may be too small to justify any effort here.


    + Turn off redirection when memory is tight. One of the benefits of building all of this mechanism on top of a window system which does provide for direct-to-screen window display is that we can automatically revert to that mode where necessary and keep running, albeit with limited eye-candy.


One thing I have noticed is a sudden interest in video cards with *lots* of
memory. GL uses video memory mostly for simple things like textures for
which it is feasible to use AGP memory. However, Composite is busy drawing
to those off-screen areas, and it really won’t work well to try and move
those objects into AGP space. My current laptop used to have plenty of
video memory (4meg), but now I’m constantly thrashing things in and out of
that space trying to keep the display updated.


[Screenshot: Preliminary Exposé-like functionality on the new X Server]


Rayiner Hashem: What impact does the design of the new server have on performance? The new X server is different from Apple’s implementation because the server still does all the drawing, while in Apple’s system the clients draw directly to the window buffers. Do you see this becoming a bottleneck, especially with complex vector graphics like those provided by Cairo? Could this actually be a performance advantage, allowing the X server to take advantage of hardware acceleration in places Apple’s implementation cannot?


Keith Packard: I don’t think there’s that much fundamental difference between X and the OS
X window system. I’m pretty sure OS X rendering is hardware accelerated
using a mechanism similar to the DRI. Without that, it would be really
slow. Having the clients hit the hardware directly or having the X server
do it for them doesn’t change the fundamental performance properties of the
system.


Where there is a difference is that X now uses an external compositing agent to bring the various elements of the screen together for presentation. This should provide for some very interesting possibilities in the future, but does involve another context switch for each screen update. This will introduce some additional latency, but the kernel folks keep making context switches faster, so the hope is that it’ll be fast enough. It’s really important to keep in mind that this architecture is purely experimental in many ways; it’s a very simple system that offers tremendous potential. If we can make it work, we’ll be a long way ahead of existing and planned systems in other environments.
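In rough outline, such a compositing agent takes manual control of window contents and repaints whenever the server reports a change. Here is a sketch assuming the companion Damage extension; both extensions were experimental at the time, so the details may differ:

```c
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xdamage.h>

/* Skeleton of an external compositing agent. A real one would also
 * create a Damage object (XDamageCreate) for each window it tracks;
 * that bookkeeping is omitted here. */
void
composite_loop(Display *dpy)
{
    int damage_event, damage_error;

    XDamageQueryExtension(dpy, &damage_event, &damage_error);

    /* Manual mode: the server stops presenting windows itself and
     * leaves screen updates entirely to this client. */
    XCompositeRedirectSubwindows(dpy, DefaultRootWindow(dpy),
                                 CompositeRedirectManual);

    for (;;) {
        XEvent ev;

        XNextEvent(dpy, &ev);
        if (ev.type == damage_event + XDamageNotify) {
            XDamageNotifyEvent *de = (XDamageNotifyEvent *) &ev;

            /* Acknowledge the damage, then recomposite the
             * affected area from the off-screen window buffers. */
            XDamageSubtract(dpy, de->damage, None, None);
            /* ... paint here ... */
        }
    }
}
```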


Because screen updates are periodic and not driven directly by graphics operations, the overhead of compositing the screen is essentially fixed. Performance of the system as perceived by applications should be largely unchanged by the introduction of the compositing agent. Latency between application action and the eventual presentation on the screen is the key, and making sure that all of the graphics operations necessary for that are as fast as possible seems like the best way to keep the system responsive.


Eugenia Loli-Queru: How does your implementation compare to Longhorn’s new display system (based on the information available so far)?


Keith Packard: As far as I can tell, Longhorn steals its architecture from OS X: DRI-like rendering by applications (which Windows has had for quite some time) and built-in window compositing rules to construct the final image.


Rayiner Hashem: What impact will the new server have on toolkits? Will they have to change to better take advantage of the performance characteristics of the new
design? In particular, should things like double-buffering be removed?


Keith Packard: There shouldn’t be any changes required within toolkits, but the hope is that enabling synchronous screen updates will encourage toolkit and window manager developers to come up with some mechanism to cooperate so that the current opaque-resize mess can be eliminated.


Double buffering is a harder problem. While it’s true that window contents
are buffered off-screen, those contents can be called upon at any time to
reconstruct areas of the screen affected by window manipulation or
overlaying translucency. This means that applications can’t be assured that
their window contents won’t be displayed at any time. So, with the current
naïve implementation, double buffering is still needed to avoid transient
display of partially constructed window contents. Perhaps some mechanism
for synchronizing updates across overlaying windows can avoid some of this
extraneous data movement in the future.


Rayiner Hashem: How are hardware implementations of Render and Cairo progressing? Render, in particular, has been available for a very long time, yet most hardware has poor to no support for it. According to the benchmarks done by Carsten Haitzler (Raster) even NVIDIA’s implementation is many times slower in the general case than a tuned software implementation. Do you think that existing APIs like OpenGL could form a foundation for making fast Render and Cairo implementations available more quickly?


Keith Packard: Cairo is just a graphics API and relies on an underlying graphics engine to
perform the rendering operations. Back-ends for Render and GL have been
written along with the built-in software fall-back. Right now, the GL
back-end is many times faster than the Render one on existing X servers
because of the lack of Render acceleration.
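For a flavor of what the Render back-end ultimately issues, its central request is XRenderComposite, which applies a Porter-Duff operator on the server side. A minimal sketch, assuming the Pictures have already been created with XRenderCreatePicture:

```c
#include <X11/Xlib.h>
#include <X11/extensions/Xrender.h>

/* Blend 'src' over 'dst' through an optional alpha 'mask' with the
 * Porter-Duff "over" operator. The server executes this, and -- once
 * drivers accelerate Render -- so does the graphics hardware. */
void
blend_over(Display *dpy, Picture src, Picture mask, Picture dst,
           int x, int y, unsigned int width, unsigned int height)
{
    XRenderComposite(dpy, PictOpOver,
                     src, mask, dst,
                     0, 0,   /* source origin */
                     0, 0,   /* mask origin */
                     x, y,   /* destination origin */
                     width, height);
}
```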


Getting better Render acceleration into drivers has been slowed by the lack
of application demand for that functionality. With the introduction of
cairo as a complete 2D graphics library based on Render, the hope is that
application developers will start demanding better performance which should
drive X server developers to get things mapped directly to the hardware for
cases where GL isn’t available or appropriate.


Similarly, while a Composite-based environment could be implemented strictly with core graphics, it becomes much more interesting when image composition can be used as a part of the screen presentation. This is already driving development of minimal Render acceleration within the X server project at freedesktop.org; I expect we’ll see the first servers with acceleration matching what the sample compositing manager uses available from CVS in the next couple of weeks.


A faster software implementation of Render would also be good to see. The current code was written to complete the Render specification without a huge focus on performance. Doing that is mostly a matter of sitting down, figuring out which cases need acceleration, and typing the appropriate code into the X server. However, Render was really designed for hardware acceleration; acceleration that should be able to outpace any software implementation by a wide margin.


In addition, there has been a bit of talk on the [email protected] mailing list about how to restructure the GL environment to make the X server rely upon GL acceleration capabilities rather than having its own acceleration code. For environments with efficient GL implementations, X-specific acceleration code is redundant. That discussion is very nebulous at this point, but it’s certainly a promising direction for development.

Rayiner Hashem: Computer graphics systems have changed a lot since X11 was designed. In particular, more functionality has moved from the CPU to the graphics
processor, and the performance characteristics of the system have changed drastically because of that. How do you think the current protocol has coped with these changes?


Jim Gettys: This is not true. The first X implementation had a $20,000 external display plugged into a Unibus on a VAX, with an outboard processor and bit-blit engine. Within 3 years, we went to completely dumb frame buffers.


Over X’s life time, the cycle of reincarnation has turned several times, round and round the wheel turns. The tradeoffs of hardware vs. software go back and forth.


As far as X’s graphics goes, X mouldered for most of the decade of the ’90s, and X11’s graphics was arguably broken on day 1. The specification we adopted forced wide lines that were both ugly and slow; we had run into the “lumpy line” problem that John Hobby had solved, but unfortunately we were not aware of his solution in time, and X was never fixed. AA and image compositing were just gleams in people’s eyes when we designed X11. Arguably, X11’s graphics has always been lame.


It is only Keith Packard’s work recently that has begun to bring it to where it needs to be.


Rob Pike and Russ Cox’s work on Plan 9 showed that adopting a Porter-Duff model of image compositing was now feasible. Having machines 100-1000x faster than what we had in 1986 helps a lot :-).


Overall, the current protocol has done well, as demonstrated by Gnome and KDE’s development over 10 years after X11’s design, though it is past time to replace the core graphics in X, which is what Render does.


Rayiner Hashem: You mentioned in one of your presentations that a major change from W to X was a switch from structured to immediate mode graphics. However, the recent push towards vector graphics seems to indicate a return of structured graphics systems. Display PDF and XAML, in particular, seem particularly well-suited to a structured API. Do you see the X protocol evolving (either directly or through extensions) to better support structured graphics?


Jim Gettys: That doesn’t mean that the window system should adopt structured graphics.


Generally, having the window system do structured graphics requires a duplication of data structures in the X server, using lots of memory and costing performance. The organization of the display lists would almost always be incorrect for any serious application. No matter what you do, you need to let the application do what *it* wants, and it generally has a better idea of how to represent its data than the window system can possibly have.


Rayiner Hashem: What impact does the compositing abilities of the new X server have on memory usage? Are there any plans to implement a compression mechanism for idle window buffers to reduce the requirements?


Jim Gettys: The jury is out: one idea we’ve toyed with is to encourage most applications to use 16-bit-deep windows as much as possible. This might often save memory over the current situation, where windows are typically the depth of the screen (32 bits). The equation is complex, and the factors are not all for or against either the existing or the new approach.


Anyone who wants to do a compression scheme of idle window buffers is very welcome to do so. Most windows compress *extremely* well. Some recent work on the migration of window contents to and from the display memory should make this much easier, if someone wants to implement this and see how well it works.


Rayiner Hashem: What impact does the design of the new server have on performance? The new X server is different from Apple’s implementation because the server still does all the drawing, while in Apple’s system, the clients draw directly to the window buffers. Do you see this becoming a bottleneck, especially with complex vector graphics like those provided by Cairo?


Jim Gettys: No, we don’t see this as a bottleneck.


One of the *really* nice things about the approach that has been taken is that the cost of your eye candy (drop shadows, etc.) is bounded by the update rate of the screen, which never needs to be higher than the frame rate (and is typically further reduced by only having to update the parts of the screen that have been modified). Other approaches often have the cost going up in proportion to the graphics updating, rather than the bounded behavior of this design, and take a constant fraction of your graphics performance.


Rayiner Hashem: Could this actually be a performance advantage, allowing the X server to take advantage of hardware acceleration in places Apple’s implementation can not?


Jim Gettys: Without knowing Apple’s implementation details it is impossible to tell.


Eugenia Loli-Queru: How does your implementation compare to Longhorn’s new display system (based on the information available so far)?


Jim Gettys: Too soon to tell. The X implementation is very new, and it is hard enough to keep up with what we’re doing, much less keep up with the smoke and mirrors of Microsoft marketing ;-). Particularly sweet is that Keith says the new facilities save code in the X server, rather than making it larger. That is always a good sign :-).


Rayiner Hashem: What impact will the new server have on toolkits?


Jim Gettys: None, unless they want to take advantage of similar compositing facilities internally.


Rayiner Hashem: Will they have to change to better take advantage of the performance characteristics of the new design? In particular, should things like double-buffering be removed?


Jim Gettys: If we provide some way for toolkits to mark stable points in their display, it may be less necessary for applications to ask explicitly for double buffering. We’re still exploring this area.


But the current Qt, GTK+, and Mozilla toolkits need some serious tuning independent of the X server implementation. See our USENIX paper. Some of the worst problems have been fixed since this work was done last spring, but there is much more to do.


Rayiner Hashem: How are hardware implementations of Render and Cairo progressing? Render, in particular, has been available for a very long time, yet most hardware has poor to no support for it. According to the benchmarks done by Carsten Haitzler (Raster) even NVIDIA’s implementation is many times slower in the general case than a tuned software implementation.


Jim Gettys: Without understanding exactly what Raster thinks he’s measured, it is hard
to tell.


We need better driver support (more along the lines of DRI drivers) to allow the graphics hardware to draw into pixmaps in the X server to take advantage of their compositing hardware.


Some recent work allows for much easier migration of pixmaps to and from the frame buffer where the graphics accelerators can operate.


An early implementation Keith did showed a factor-of-30 speedup from hardware assist for image compositing, but it isn’t clear whether the current software implementation is as optimal as it could be, so that number should be taken with a grain of salt. But fundamentally, the graphics engines have a lot more bandwidth and wires into VRAM than the CPU does into main memory.


Rayiner Hashem: Do you think that existing APIs like OpenGL could form a foundation for making fast Render and Cairo implementations available more quickly?


Jim Gettys: Understand that today’s X applications draw fundamentally differently than your parents’ X applications; we’ve found that a much simpler and narrower driver interface is sufficient for 2D graphics (3D remains hard). The wide XFree86 driver interface optimizes many graphics requests no longer used by current GTK+, Qt or Mozilla applications. For example, core text is now almost entirely unused: I now use only a single application that still uses the old core text primitives; everything else is AA text displayed by Render.


So to answer your question directly, yes we think that this approach will form a foundation for making fast Render and Cairo implementations.


The fully software based implementations we have now are fast enough for most applications, and will be with us for quite a while due to X’s use on embedded
platforms such as handhelds that lack hardware assist for compositing.


But we expect high performance implementations using graphics accelerators will be running over the next 6 months. The proof will be in the
pudding, now in the oven. Stay tuned :-).

David Zeuthen: First of all, it might be good to give an overview of the direction HAL (“Hardware Abstraction Layer”) is going post the 0.1 release, since a few key things have changed.


One major change is that HAL will not (initially at least, if ever) go into device configuration such as mounting a disk or loading a kernel driver.


Features like this really belong in separate subsystems. Having said that, HAL will certainly be useful when writing such things. For instance, a volume manager, as proposed by Carlos Perelló Marín on the xdg-list, should (excluding the optical drive parts) be straightforward to write, insofar as such a program will just listen for D-BUS events from the HAL daemon when storage devices are added/removed, and mount/unmount them.
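To make the volume-manager idea concrete, such a program is essentially a small D-BUS client sitting on the system bus. Below is a sketch against the libdbus C API as it later stabilized; since both D-BUS and HAL were pre-1.0 at the time, the interface and signal names used here (“org.freedesktop.Hal.Manager”, “DeviceAdded”) should be read as illustrative:

```c
#include <stdio.h>
#include <dbus/dbus.h>

int
main(void)
{
    DBusError err;
    DBusConnection *conn;

    dbus_error_init(&err);
    conn = dbus_bus_get(DBUS_BUS_SYSTEM, &err);
    if (conn == NULL) {
        fprintf(stderr, "cannot reach system bus: %s\n", err.message);
        return 1;
    }

    /* Ask the bus to route HAL's signals to us. The interface name
     * is illustrative -- HAL's D-BUS API was still settling. */
    dbus_bus_add_match(conn,
        "type='signal',interface='org.freedesktop.Hal.Manager'", &err);
    dbus_connection_flush(conn);

    while (dbus_connection_read_write(conn, -1)) {
        DBusMessage *msg;

        while ((msg = dbus_connection_pop_message(conn)) != NULL) {
            if (dbus_message_is_signal(msg,
                    "org.freedesktop.Hal.Manager", "DeviceAdded")) {
                /* A real volume manager would inspect the device
                 * and mount it here. */
                printf("device added\n");
            }
            dbus_message_unref(msg);
        }
    }
    return 0;
}
```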


Finally, the need for Free Device Information files (.fdi files) won’t be that big initially, since most of the smart busses (USB, PCI) provide device class information that we can map to HAL device capabilities. However, some devices (like my Canon Digital IXUS v camera) just report the class/interface as proprietary, so the files are still needed.


There are a lot of other reasons for supplying .fdi files, though. First of all, some capabilities of a device that DEs are interested in are hard or impossible to guess. For example, people should be able to use a digital camera or mp3 player as a storage device, as many people already do. Second, having .fdi files gives the opportunity to fine-tune the names of devices and maybe even localize them into many languages. Third, we can advertise certain known bugs or deficiencies in the device to the libraries/servers using the device.


Rayiner Hashem: HAL seems to overlap in a lot of ways with existing mechanisms like hotplug and kudzu. Will HAL interoperate with these projects or replace them entirely?


David Zeuthen: HAL might replace kudzu one day, when we get more into device configuration. In the meantime both mechanisms can peacefully coexist.


For linux-hotplug, and udev for that matter, I’d say the goal is definitely to interoperate, for a number of reasons: first of all, linux-hotplug is already widely deployed and it works pretty well; second, it may not be in a vendor’s best interest to deploy HAL on an embedded device (though HAL will be lean and only depend on D-BUS) because of resource issues. Finally, it’s too early for HAL to go into device configuration, as noted above.


Rayiner Hashem: HAL is separate from the underlying kernel mechanisms that handle the actual device management. Is there a chance, then, that information could get out of sync, with HAL having one hardware list and the kernel having another? If so, are there any mechanisms in place that would prevent this from happening, or allow the user to fix things manually?


David Zeuthen: There is always the possibility of this happening, but with the current design I’d say that the chances are slim. Upon invocation of the HAL daemon, all busses are probed (via a kernel interface) and devices are removed/added as appropriate using the linux-hotplug facilities.


There will be a set of tools shipped with HAL; one of them will wipe the entire device list and reprobe the devices. I do hope this will never be needed, though :-).


Eugenia Loli-Queru: Gnome/KDE are multiplatform DEs, but HAL for now is pretty tied to Linux. If HAL is to be part of Gnome/KDE, how easy or difficult would it be to port it to the BSDs or other Unices?


David Zeuthen: With the new architecture, most of the HAL parts are OS-agnostic; specifically, the only Linux-specific parts are less than 2000 lines of C code for handling USB and PCI devices using the kernel 2.6 sysfs interface. That will probably grow to 3-4k LOC when block devices are supported.
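For a flavor of what those Linux-specific parts do, here is a minimal sketch of walking the PCI bus through the kernel 2.6 sysfs interface; HAL’s actual back-end is of course more involved, and error handling is trimmed:

```c
#include <stdio.h>
#include <dirent.h>

/* List the PCI devices the 2.6 kernel exposes under sysfs and read
 * each one's vendor id -- the raw material HAL turns into device
 * objects and capabilities. */
int
main(void)
{
    DIR *dir = opendir("/sys/bus/pci/devices");
    struct dirent *entry;

    if (dir == NULL)
        return 1;

    while ((entry = readdir(dir)) != NULL) {
        char path[512], vendor[16];
        FILE *f;

        if (entry->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/vendor", entry->d_name);
        f = fopen(path, "r");
        if (f == NULL)
            continue;
        if (fgets(vendor, sizeof(vendor), f))
            printf("%s: vendor %s", entry->d_name, vendor);
        fclose(f);
    }
    closedir(dir);
    return 0;
}
```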


The insulation from the OS is important, not only for supporting FreeBSD, Solaris and other UNIX and UNIX-like systems, but more importantly because it allows the OSes that said DEs run on to make drastic changes without affecting the DEs. So, maybe we won’t get FreeBSD support for the next release of HAL, but anyone is able to add it when they feel like it.


I’d like to add a few things on the road map for HAL. The next release (due in a few weeks, give or take) will be quite simple, insofar as it basically just gives a list of devices. It will also require Linux kernel 2.6, which may be a problem for some people (but they are free to write the Linux 2.4 parts; I already got USB support for 2.4).


Part of the release will also feature a GUI “Device Manager” to show the devices. Work-in-progress screenshots are here.


Post 0.2 (or 0.3, when it’s stable) I think it will be time to look into integrating HAL into existing device libraries, such that programmers can basically just hand the library a HAL object and have it do the rest; this will of course require buy-in from such projects, as it adds D-BUS and, maybe, HAL as a dependency. Work on a volume manager will also be possible post 0.2.


It may be pretentious, but in time I’d also like to see existing display and audio servers use HAL. For instance, an X server could get the list of graphics cards (and monitors) from HAL and store settings in properties under its own namespace (FDOXserver.width etc.). This way it will be a lot easier to write configuration tools, especially since D-BUS sports Python bindings, as opposed to editing an arcane XFree86Config file.


There are a lot of crazy, and not so crazy, ideas we can start to explore when the basics are working: security (only daddy can use daddy’s camera), per-user settings (we could store the name of a camera for display in GNOME/KDE), network transparency (plug a USB device into your X-terminal and use it on the computing server you connect to).


The Fedora developers are also looking into creating a hardware web site, so the device manager could find .fdi files this way (of course, this must be done in a distro/OS-independent way).

Rayiner Hashem: How is KDE’s involvement in the freedesktop.org project going?


Waldo Bastian: It could always be better, but I think there is a healthy interest in what freedesktop.org is doing, and with time that interest seems to be growing.


Rayiner Hashem: While it seems that there has been significant support for some things (the NETWM spec) there also seems to be a lot of friction in other places. This is particularly evident for things like the accessibility framework or glib that have a long GNOME history.


Waldo Bastian: I don’t see the friction, actually. KDE is not thrilled to use glib, but nobody at freedesktop.org is pushing glib. Using it for some things was considered at some point, and the conclusion was that it wouldn’t be a good idea. The accessibility framework is a whole different story. KDE is working closely with Bill Haneman to get GNOME-compatible accessibility support into KDE 4. Things are still moving a bit slowly on our side, in part because we need to wait on Qt4 to get some of the needed support, but the future looks very good on that front. TrollTech has made accessibility support a critical feature for the Qt4 release, so we are very happy with their commitment to this. We will hopefully be able to show some demos in the near future.


Rayiner Hashem: What are the prospects for D-BUS on KDE? D-BUS overlaps a great deal with DCOP, but there seems to be a lot of resistance to the idea of replacing DCOP with D-BUS. If D-BUS is not replacing DCOP, are there any technical reasons you feel DCOP is better?


Waldo Bastian: D-BUS is pretty much inspired by DCOP, and being able to replace DCOP with D-BUS is one of the design goals of D-BUS. Of course we need to look carefully at how to integrate D-BUS into KDE; it will be a rather big change, so it’s not something we are going to do in the KDE 3.x series. That said, with KDE 3.2 heading for release early next year, we will start talking more and more about KDE 4, and KDE 4 will be a good point to switch to D-BUS. Even though KDE 4 is a major release, it will still be important to keep compatibility with DCOP as much as possible, so that’s something that will need a lot of attention.


Rayiner Hashem: What do you think of Havoc Pennington’s idea to subsume more things into freedesktop.org, like unified MIME associations and a VFS framework? What impact do you think KDE technologies like KIO will have on the design of the resulting framework?


Waldo Bastian: I think those ideas are spot on. The unified MIME associations didn’t make it in time for KDE 3.2, but I hope to get that implemented in the next KDE release. Sharing a VFS framework will be somewhat more difficult. Since the functionality that KIO offers is quite complex, it may not really be feasible to fold all of that into a common layer. What would be feasible is to take a basic subset of functionality common to both VFS and KIO and standardize an interface for that. The goal would then be to give applications the possibility to fall back to the other technology, with some degradation of service, in case a specific scheme (e.g. http, ftp, ldap) is not available via the native framework. That would also be useful for third-party applications that do not want to link against VFS or KIO.


Rayiner Hashem: A lot of the issues with the performance of X11 GUIs have been tracked down to applications that don’t properly use X. We’ve heard a lot about what applications should do to increase the performance of the system (handling expose events better, etc.). From the KDE side, what do you think the X server should do to make it easier to write fast applications?


Waldo Bastian: “Fast applications” is always a bit of a difficult term. Everyone wants fast applications, but it’s not always clear what that means in technical terms. Delays or lag in rendering are often perceived as “slow”, and a more aggressive approach to buffering in the server can help a lot in that area.


I myself noticed that server-side font handling tends to cause slow-downs in the startup of KDE applications. Xft should have brought improvements there, although I haven’t looked into that recently.
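The improvement comes from Xft moving font handling to the client: fonts are opened and rasterized client-side through fontconfig/FreeType, and the glyphs reach the server via Render, bypassing the old server-side font path. A minimal sketch (window and visual setup assumed):

```c
#include <string.h>
#include <X11/Xlib.h>
#include <X11/Xft/Xft.h>

/* Draw a line of antialiased text with client-side fonts. */
void
draw_greeting(Display *dpy, Window win, int screen)
{
    XftDraw *draw = XftDrawCreate(dpy, win,
                                  DefaultVisual(dpy, screen),
                                  DefaultColormap(dpy, screen));
    XftFont *font = XftFontOpenName(dpy, screen, "Sans-12");
    XftColor color;
    const char *text = "Hello, antialiased world";

    XftColorAllocName(dpy, DefaultVisual(dpy, screen),
                      DefaultColormap(dpy, screen), "black", &color);
    XftDrawStringUtf8(draw, &color, font, 20, 40,
                      (const FcChar8 *) text, strlen(text));
}
```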


Other KDE developers may have better examples.


Eugenia Loli-Queru: If Qt changes are required to conform with changes needed for interoperation with GTK+, Java or other toolkits, is TrollTech keen on complying? If KDE developers do the work required, is TrollTech keen on applying these patches to their default X11 tree?


Waldo Bastian: TrollTech is overall quite responsive to patches, whatever their nature, but in some cases it takes a bit longer than we would like to get them into a Qt release. That said, we have the same problem in KDE, where we sometimes have patches sitting in our bug database that take quite a long time before they get applied. (Sorry, BR62425!)
