Now, that was an interesting read on the XFree86 forum mailing list. We get individuals and companies like Sun, SciTech Software, Red Hat etc. ‘fighting’ over issues ranging from what XFree86 really needs, to replacing fontconfig with Sun’s STSF, to XFree86 co-founder David Wexelblat saying that XFree is obsolete today and needs to be replaced with a direct-rendered model (while retaining backwards compatibility), to Keith Packard replying as to why a new organization to handle X is needed, and more. Our Take: One thing is clear after reading all these messages: a lot of people are not happy with what’s happening with the development of XFree86. It is obvious that more discussion is needed to decide what’s going to be implemented and what isn’t, and from these emails it seems that there was no real, common direction discussed between the interested parties until yesterday. No real communication seemed to exist!
Let’s hope that this open forum list will show what people want and need, and will ‘open up’ the XFree86 organization in a way that allows more CVS commits, as the project seems somewhat stagnant and doesn’t move as fast as it should, as some Red Hat employees also noted (for example, on-the-fly resolution switching was introduced only a few months ago with the RandR extension, while Windows 95 could do that back in 1995).
The XFree86 project has always looked a bit conservative to me, while more development and openness is needed. There is no need for a ‘new XFree’, but there is a need for more development and ‘fixing’ of the existing codebase.
Exactly what I’m talking about. This tearing occurs and people delude themselves that X is just ok. You can’t say X is fast and then turn around and say, but “oh it tears when I drag windows”. Seriously, you people are deluding yourselves. Are you going to now tell me that Nvidia’s 4191 driver set is really fast in 2D despite the fact that Nvidia acknowledges there is a performance problem with them? I’m using the 3123s and the performance does NOT match up with the same computer’s performance in Windows XP, as far as 2D goes. Windows do NOT tear when I drag them around in Windows XP. They DO tear when I drag them around (especially diagonally) in X. I challenge all the “X is fast” crowd to drag around some windows and not see visible tearing, stuttering, and redrawing of windows underneath the dragged window. Blaming things on toolkits is so lame – if X can’t handle modern graphical widgets, it is SLOW!
X needs improvement to surpass Windows. As of now it doesn’t. I don’t care what anyone says; the 2D performance is horrible.
Oh, and besides, X is network transparent. But when run locally, this goes over a Unix domain socket, which is not very much more than a memcpy(). SysV shared memory between the client and the X server is also used when running locally.
Except for the context switching: every individual drawing operation (or rather, every time read() and write() are called) requires two context switches, one in the client to perform the write() system call and one in the server for the read() system call.
If shared memory were used for the window’s entire raster buffer rather than just for passing pixmaps, drawing operations would be isolated from the server. The server would not have to multiplex drawing operations from several client sockets (usually via O(n) multiplexing mechanisms such as select/poll), and it wouldn’t block under heavy drawing load.
It makes much more sense to use shared memory, as there really is no need for all that context switching: both the server and the clients are userspace processes, so there shouldn’t be any need to get the kernel involved.
Exactly what I’m talking about. This tearing occurs and people delude themselves that X is just ok.
The tearing occurs when moving between the TwinView desktop and the third desktop. The two are joined using Xinerama, support for which is fairly new in the NVidia drivers.
Dragging windows on the TwinView desktop – which is 3520×1200 pixels in size – shows no such tearing.
You can’t say X is fast and then turn around and say, but “oh it tears when I drag windows”.
It only “tears” in one specific circumstance. On my other machines, which are either single-head or just TwinView, I don’t see any tearing at all.
Seriously, you people are deluding yourselves. Are you going to now tell me that Nvidia’s 4191 driver set is really fast in 2D despite the fact that Nvidia acknowledges there is a performance problem with them?
The 4191 driver set introduced a new backend acceleration architecture. There is one acknowledged code path which is not optimised – the only time I’ve seen it triggered is when Nautilus is drawing the desktop. This is a bug in the Nvidia drivers rather than anything inherent in X, and it doesn’t affect me because I don’t use Nautilus to draw the desktop (and it’s not as if the Windows drivers are totally bug-free either, as anyone who’s tried to get a stable three-card setup will attest).
I’m using the 3123s and the performance does NOT match up with the same computer’s performance in Windows XP, as far as 2D goes. Windows do NOT tear when I drag them around in Windows XP. They DO tear when I drag them around (especially diagonally) in X. I challenge all the “X is fast” crowd to drag around some windows and not see visible tearing,
*drags a mozilla window around frantically*
*notices there is no tearing*
ok.
stuttering
No stuttering.
and redrawing of windows underneath the dragged window.
Oh, I’ll give you this one. Not especially noticeable, however.
Blaming things on toolkits is so lame – if X can’t handle modern graphical widgets, it is SLOW!
It’s not lame, it’s grounded in fact. It’s perfectly possible to write a “widget” set on Windows that exhibits terrible performance (hell, just play with some of the ridiculous skinned apps for an example). GTK and Qt, especially in early versions, did some pretty stupid things which impacted performance.
Suffice it to say, I’ve got a huge desktop (5120×1200 pixels in size) and performance is brilliant. Your experience may well differ, but that could be due to any number of other factors. At the end of the day, it comes down to your anecdote versus my anecdote – which is nothing more than an impasse.
Nice try though.
The Fresco project acknowledges the need to provide backward compatibility with X11 apps. I have been tracking their progress and it looks promising. They need more developers and bug busters… bottom line, more support.
By the way, this project’s progress is very transparent and the developers seem really nice, unlike the closed-door feeling you get from XFree86. And lest I forget, they try to provide good project DOCUMENTATION.
=)
Most resources are consumed by image resizing and text formatting, not by the network layer.
On the forum mailing list, most people talk about smoothness. 3D achieves that with double buffering, which could be integrated into X in the future.
If Microsoft can make a desktop that runs at breakneck speed on a direct display, yet still manages to be completely usable over network, surely the OSS dream team can manage it?
First, I run 1800×1350 every day, and it’s as smooth as running 1600×1200 under XP on the same box (the Windows driver doesn’t support anything between 1600 and 1920, and 1920 won’t work on my monitor). And this isn’t some super state-of-the-art machine; it’s a P3/1000 with an ATI 3D Rage IIc. (I also have a K6-2/500 with a TNT II and a G3/333 with two Rage 128s.)
The only time I ever see any jerkiness in dragging windows under X is when the CPU is pegged. Under Windows, I also get jerkiness when the CPU is pegged–or when WMP9 starts a new video off the playlist, or I open a new document in Word, or Explorer fills in a dynamic context menu (in fact, in that case, the mouse freezes for up to three seconds!).
Actually, what I said above isn’t exactly true. I never see these problems under X when using linux 2.4 with the preemptible kernel and other multimedia patches, or recent linux 2.5 kernels, or FreeBSD; I do see them with stock linux 2.4. But you can’t blame X for problems with the linux kernel–especially problems that have been acknowledged by the core linux developers and will be fixed in the next major version.
However, XFree86 does have noticeable problems with refreshing, which show up on every platform. If I drag a local emacs window over a local mozilla window, the handful-of-refreshes-per-second flashing is pretty cool when you’re on the right drugs, but otherwise it’s annoying. Needless to say, this works perfectly under XP and under OS X.
The problem isn’t the client/server architecture. You only need to do a context switch for each batch, not for each single operation. So, unless your CPU is pegged, or you’re using a really stupid app, this is not a bottleneck. The problem is that the backing store system was designed for refreshing as well as possible on slow remote connections, and needs to be reworked.
Meanwhile, Windows is not “completely usable” over a network. I rely on the fact that I can run remote applications and native applications with almost no difference. Which is more secure, and which is simpler: making the whole filesystem on my server write-accessible over NFS or SMB, or running an X editor over ssh? Which is more secure, and simpler: poking holes in my firewall for p2p, or running p2p clients remotely over ssh? And the best part is, I can run emacs or gtk-gnutella on my server and interact with it from my workstation whether I’m running linux or Windows or MacOS (as long as I have a decent rootless X server for Windows, of course).
A full remote desktop (like vnc or PCA) is not the same–especially implementations where there’s a single desktop shared by whoever’s logged in locally and all remote users. Remote desktop systems certainly have uses (it’s very nice to be able to help someone by moving their mouse around their screen remotely, so they can see what I’m doing), but they’re not the same uses as transparent client/server windowing.
So, having disagreed with the common wisdom that XFree86 is too jerky and can’t handle high resolutions because of its client/server system, let me switch to the other side and disagree with the common wisdom that XFree86 handles 3D tolerably well.
From what I’ve seen, if you have the right nVidia card, the right version of the linux kernel, an x86, and a closed-source driver from nVidia, you can get pretty good GL performance. But if you have the wrong nVidia card, or a non-nVidia card, or the wrong kernel version, or an OS other than linux, or a non-x86 machine, or you have problems getting the closed-source drivers working, you have nothing.
My Mach64-based card is no great shakes, but under Windows, it can handle everything but recent games. For example, I can play strategy games like Hearts of Iron or Total War; I can even play action games like Grand Theft Auto III if I turn off some of the options. Under linux, I can barely run the simplest OpenGL demos. Worse, if I try to run most OpenGL games, my entire computer often freezes up so badly that I have to use sysrq magic or telnet in to kill the X server (even ctrl+alt+bkspc and ctrl+alt+f1 don’t respond).
Even my newer cards (the Rage 128s and the TNT II), which are even more usable under MacOS and Windows, are almost completely useless in linux. An open source game like egoboo gets about 1/10th the framerate under linux that it gets on the same hardware under Windows or MacOS.
The problem isn’t that X uses an inferior architecture for GL; in fact, the linux/X GL model is nearly identical to the Windows model, and that’s the problem. Writing GL drivers for either platform is a huge amount of work–and, while anyone who wants to sell a video card will write Windows 3D drivers for it (or at least anyone who wants to sell a video chipset), few do the same for linux/X. And there’s even less support for BSD/X, or linux/X on a non-x86 box, or a kernel that nVidia hasn’t gotten to yet.
You can’t claim that XFree86 does 3D well if most people can’t use that 3D support in most cases.