It seems that XFree sucks hard for most people.
I know that my gaming performance under Linux is nowhere near what I get in Windows. RTCW runs below 20fps under Linux where it runs rather smoothly (Maybe 30-40fps?) under Windows.
One thing I really don’t get about windowing systems lately is the transparent/translucent stuff. It may look cool, but I can’t imagine using this at all. If you’re working more than an hour a day in front of a monitor, the last thing you need is for your eyes to be constantly trying to focus on something that is distorted by things behind it.
With regards to X sucking… If you really want a better windowing system for your favourite (Canadian spelling) OS, then develop something else, or better yet join one of the projects that Eugenia mentioned.
Your original comment complained about wanting something more modern, yet at the same time you use GNU/Linux which is modeled after a very old system (UNIX).
XFree 4 is very modular, pretty stable (if you have a decent graphics card), and it is fast. And yes, it can do hardware acceleration on the alpha transparency stuff with XRender. And it has excellent networking support built in.
The very few games that were ported from Windows to Linux/XFree (UT, Quake, RTCW) have been reported to be faster on Linux than on Windows.
Well, I had vowed to stay out of these discussions, but oh well.
First off, this is a bug fix release. If you don’t like X or XFree, why pollute our discussion with your biases?
And, let me tell you something. For those people out there who do more with their computers than play games and talk on IRC, X is not just fine, it’s fantastic. As a person who spends 95% of his time on his personal computer (running gentoo) writing c & c++ code, I couldn’t ask for a better display system. X gives *me* control over how my desktop looks, feels, and everything in between. My little thinkpad, with its antiquated display system (which, shudder, can’t even give me a proper translucent terminal), is the best development environment I could ask for.
Note that I used to run BeOS — I was even a BeOS zealot at one point. My point: OS/display-system/etc zealotry is a fool’s game. Just *use* your damn computer.
Nothing we have right now (computerwise) will be the same ten years from now, probably not even five. THINGS CHANGE. Sometimes they get better, sometimes not. Sometimes we have to grieve (poor, poor BeOS), and then we have to move on and get back to work.
I don’t run my computer so I can compare framerates for GLGears or repeatedly resize my windows to see what the redraw speed is. I use my computer to WRITE CODE, code which I use to DO THINGS. A computer is a tool.
I would also like to see a ‘memory leak fix’ release.
XFree leaks here’n’there; it’s not fun trying to debug memory leaks in GUI apps, only to discover that half of them are buried deep under Qt/GTK, in the base X libraries. Not to mention the server itself.
Someone care to take a tour through the XFree libs with valgrind and fix what they find?
The very few games that were ported from Windows to Linux/XFree (UT, Quake, RTCW) have been reported to be faster on Linux than on Windows.
Not really. I only saw one review where Linux had a couple more fps than Windows, and they were comparing a tweaked SuSE, with only a few daemons running, against an out-of-the-box Windows ME (sloooow). Anyway, games mostly get a few more fps on Windows because they’re designed to run there and only sometimes get ported to Linux. For the most part, speed in games (running on the same API: OpenGL) is derived from the quality of the drivers; most of the work is done either by the GPU or by the CPU with the hardware drivers, with the API acting in between. Operating systems have little to do with it.
PS:games(ut, quake, rtcw) are NOT ported to xfree (THANK GOD)
PS nº2: (to make this more on topic) xfree is a piece of cr*p
Well, I’d like to avoid getting into the whole X issue again, except to comment:
X on my P4 2.0 works just fine, almost as fast as Win2K on the same machine. However, I’ve got it a little tweaked:
1) I’m running X at -11 priority. You can do this by editing your config file (which one depends on the distro).
2) I’m running all KDE apps at -10 priority. This can be done by changing your xinitrc from
startkde
to
nice -n "-10" startkde
3) I’m running all Konsole apps at 1 priority. This can be done by right-clicking on the konsole link, and changing the text of the link from
konsole
to
nice -n "11" konsole
(note: the 11 is -10 + 11 = 1)
With this setup, my cursor never jumps and the UI never stalls even when running a big compile in the background. That can’t be said for WinXP on the same machine. The only real trouble spot with X11 for me is fonts. The font rendering itself is fine (contrary to what Eugenia has said in the past, FreeType is quite a high-quality renderer, with the patented TrueType interpreter on anyway, and in theory all TrueType renderers should produce almost identical output due to the hinting mechanism). However, the sub-pixel rendering algorithm leaves a little to be desired. WinXP renders Tahoma 8pt on my 133 dpi screen about 1.5 pixels thick (one solid pixel, shaded pixels around it). Xft renders the same type more lightly, exactly one pixel thick. That makes it very hard to see on my screen, because the pixels are so tiny. This is most likely due to three things: 1) ClearType doesn’t seem to apply all the hints that snap text to certain pixel widths. This makes text overall a little more fuzzy, but on a high-res screen the fuzz blends to make very nice-looking text. 2) ClearType has a different filtering algorithm (in subpixel rendering, btw, the filtering algorithm is used to get rid of the color fringes that result from using the monochrome subpixels for detail) which produces more color-fringing than Xft’s, but leads to nicer letter shapes. 3) ClearType gamma-corrects the anti-aliased text, but Xft doesn’t. Patches for 1) and 3) are in the works, but I don’t know what is being done about 2), or whether the ClearType patents hamper that.
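The filtering step in 2) is easier to see with a toy model. Below is a rough Python sketch of subpixel filtering: glyph coverage sampled at 3x horizontal resolution is run through a low-pass FIR filter so each sample’s energy bleeds into neighbouring subpixels, trading a bit of sharpness for reduced color fringing. The 5-tap weights here are illustrative only, not the filter Xft or ClearType actually uses.

```python
def filter_subpixels(coverage, taps=(1/9, 2/9, 3/9, 2/9, 1/9)):
    """Apply a symmetric FIR filter to a row of 3x-resolution coverage
    values (floats in [0, 1]); returns one (r, g, b) intensity triple
    per whole pixel. The taps are made-up illustrative weights."""
    half = len(taps) // 2
    filtered = []
    for i in range(len(coverage)):
        acc = 0.0
        for k, t in enumerate(taps):
            j = i + k - half
            if 0 <= j < len(coverage):
                acc += t * coverage[j]
        filtered.append(acc)
    # group every three subpixel values into one pixel's RGB channels
    return [tuple(filtered[i:i+3]) for i in range(0, len(filtered) - 2, 3)]

# a hard 1-subpixel-wide vertical stem, sampled at 3x resolution
row = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
pixels = filter_subpixels(row)
```

Since the taps sum to 1, total coverage is preserved; the stem just gets smeared across the neighbouring R, G and B subpixels instead of lighting one of them at full intensity, which is exactly the fringe-vs-sharpness tradeoff described above.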
I used to think like you do, that XFree86 was the cause of all of my video problems. Running TuxRacer, Quake III and some others was painful, ~3 frames per second. It turns out the mediocre "nv" video driver that comes with RedHat 7.x couldn’t render 3D for my GeForce2 MX card. A quick stop at NVIDIA to download their GLX and "nvidia" drivers fixed everything. Now everything runs at more than 30 frames per second.
Hey, don’t blame the XF guys for that. The ‘nv’ driver is actually slightly faster in 2D than the NVIDIA driver. It’s just that NVIDIA won’t give out specs for the 3D core.
Sorry about that. I didn’t know about NVIDIA not releasing the specs for the 3D core. Wrong choice of words to say it doesn’t do everything I wanted it to.
[i]Not really. I only saw one review where Linux had a couple more fps than Windows, and they were comparing a tweaked SuSE, with only a few daemons running, against an out-of-the-box Windows ME[/i]
Well, I heard it from a friend:
PC:
nvidia tnt2 with an athlon 500MHz, 256mb ram
(you notice speed differences better on a slow machine :-))
OS:
Suse 7.x box, only tweaking applied was installing the nvidia drivers.
Windows 2000, also with the official nvidia drivers.
Game:
RTCW
His reaction was that RTCW was a lot more “playable” (meaning faster) on Linux than on Windows.
Btw, those 3D games *do* use modules that are part of XFree86.
I’m wondering how much of X itself and how much of the Freetype library would have to be modified to get Cleartype’s results? I mean, I thought X itself could do gamma correction for images if needed. Or is that coming in the future? Gamma correction seems like it would be too useful to be used *only* in font rendering. Seems like something X.org would have done… Oh well, you said it was coming, so it’s possible one way or another.
As for Freetype, yes, it seems as if its font “prettification” algorithms would have to be selectively modified to get Cleartype’s results. And I bet some of it is covered under Microsoft’s patents. It’s probably the really good parts too, just like Apple’s (happily well documented) TrueType hinting patents. Of course, there’s nothing stopping people from turning on the capability during compilation, and it looks fabulous (I’m using version 2.1.2). Makes me wish that X had a QuarkXPress or Adobe InDesign type application to really show off the advanced features of OpenType fonts that Freetype can use. I haven’t the coding skills yet to actually make one, but I’d love to help out.
But I’ve been using the nvidia drivers ever since I figured out how to install them, which was about 5 months ago. And yes, I’d have to agree that the default nv drivers do really suck.
I’m wondering how much of X itself and how much of the Freetype library would have to be modified to get Cleartype’s results?
>>>>>>
It really depends on exactly what ClearType is doing. If it’s just a matter of a better filter, then most of the work can be done in Xft. If it needs work on modifying how hints are applied, then FreeType itself would need to be modified.
I mean, I thought X itself could do gamma correction for images if needed.
>>>>>>
Just on the whole image (xgamma -gamma <gamma_value>). But it can’t do it for specific bitmaps. I’m not entirely sure why the text itself needs to be gamma-corrected if the whole screen is being corrected, but I do know it makes a difference. You know the ClearType control panel on MS’s site? It basically modifies the gamma correction factor used.
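To see why per-glyph gamma correction makes a visible difference at all, here’s an illustrative sketch (assuming a display gamma of 2.2; this is not what ClearType or Xft literally compute): blending antialiased coverage directly on gamma-encoded pixel values gives different midtones than linearizing, blending in linear light, and re-encoding, which is one reason the same coverage can look heavier or lighter on different renderers.

```python
def blend_naive(fg, bg, a):
    """Blend directly on gamma-encoded values in [0, 1]."""
    return a * fg + (1 - a) * bg

def blend_gamma_correct(fg, bg, a, gamma=2.2):
    """Linearize, blend in linear light, then re-encode for the display."""
    lin = lambda v: v ** gamma
    enc = lambda v: v ** (1 / gamma)
    return enc(a * lin(fg) + (1 - a) * lin(bg))

# a 50%-coverage edge pixel of black text on a white background
naive = blend_naive(0.0, 1.0, 0.5)            # gamma-space blend
correct = blend_gamma_correct(0.0, 1.0, 0.5)  # linear-light blend, ~0.73
```

The two results differ by a noticeable amount for every partially-covered edge pixel, so whether a renderer does this correction changes the apparent weight of the whole glyph.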
As for Freetype, yes, it seems as if its font “prettification” algorithms would have to be selectively modified to get Cleartype’s results. And I bet some of it is covered under Microsoft’s patents. It’s probably the really good parts too, just like Apple’s (happily well documented) TrueType hinting patents. Of course, there’s nothing stopping people from turning on the capability during compilation, and it looks fabulous (I’m using version 2.1.2).
>>>>>>>
I don’t really know the patent situation right now. So far one patent has been granted, and there are something like 20 in the pipeline. Of course, the filtering algorithm is pretty well documented: http://research.microsoft.com/~jplatt/cleartype/
Beware, lots of math! However, exactly how ClearType deals with hints and exactly how the gamma correction works are still unknowns.
I don’t understand why people keep bashing Xfree86. It seems to work fine for me, although I am not a big gamer, but video is pretty darn good. And the key word here is free, and if you don’t like it buy windows. Just mho
“And the key word here is free, and if you don’t like it buy windows.”
Well I guess if it’s free nobody should make suggestions/complaints about its performance in certain aspects! I mean really, it’s only the dominant graphical environment for Linux (*nix as well? I’ve only had Linux). We should just be happy and not give our feedback.
Well I guess if it’s free nobody should make suggestions/complaints about its performance in certain aspects! I mean really, it’s only the dominant graphical environment for Linux (*nix as well? I’ve only had Linux). We should just be happy and not give our feedback
>>>>>>>>>
We should definitely give feedback, but not bitch unless we can certify there’s a problem. With MS, if WinXP is slow, we can go bitch (the GUI is slow, fix it). If on Linux the GUI is slow, just blaming XFree is a cop-out. You first have to take care to see exactly where the problem is, and then take your problem to the appropriate people. Also, just saying “X is slow, when will it die” is counterproductive. A much more useful comment would be “Function XPutImage performs very poorly with the ATI driver, could you take a look at that issue?”
Slackware only comes with XFree; if you wanted one of the other projects you would have to download and install it yourself or look for a package at
X detractors seem to concentrate on speed of the GUI – that’s never been a problem for me since ’92, but I guess I know what I’m doing?
BUT… please also remember the PLATFORM INDEPENDENT, REMOTE DISPLAY CAPABILITY of X, which NOTHING ELSE HAS!
NO, Windows Terminal Server, VNC, PC Anywhere, etc. do NOT substitute for this.
Every app. on your desktop could be running on a different computer, even different KINDS of computers.
This allows very thin, cheap client systems, easy support of centralized applications, COMPLETELY locked down, controlled desktops in environments where that is desirable, and better response than web-based client-server environments.
X is definitely not the only client/server GUI, just the most popular. Among the others…
– QNX’s Photon GUI is not only network transparent, it has some pretty spiffy features like the ability to split a client’s display stream to multiple servers. There is a version of the Photon GUI server that runs under windows also, so there’s a small cross-platform aspect.
– VNC isn’t elegant, but it’s the most portable system possible
– Fresco has an extremely versatile client/server design, using CORBA for everything
– PicoGUI has a consistent and speedy interface even over slow network connections, one effect of implementing most functionality on the server side
I’m sure there are others… but not only have other GUIs come close to X in network transparency, they have gone far beyond it.
Yeah, I’m sure you guys know I’m a Fresco supporter 😛
Now here’s my rant. XFree is good enough for today. Have you guys played Quake and TuxRacer on XFree using nVidia’s drivers: it was fast!
But I don’t think X can hold up that long. It has too many design flaws that could mean suicide in the future (distant, mind you). I believe the future goes to something that does it differently than X11, like Fresco, instead of something that does it like X11 but just fixes some flaws (like speed, semi-transparency).
And for those who think semi-transparency is not important, guess what: it has to do with a lot more than eye candy. Office, for example, uses alpha blending a lot for the pies, charts and graphs in Excel.
Alex: BUT… please also remember the PLATFORM INDEPENDENT, REMOTE DISPLAY CAPABILITY of X, which NOTHING ELSE HAS!
Probably the most important reason why Fresco chose CORBA is the remote display capability (vector objects in Fresco are different from the raster objects of X11, BTW).
Besides, ever considered why remote display capabilities have never been a focus of Windows and Mac OS? A LOT of people DON’T need them, and the rest who do are shrinking rapidly.
Besides, ever considered why remote display capabilities have never been a focus of Windows and Mac OS? A LOT of people DON’T need them, and the rest who do are shrinking rapidly.
It may be that the majority of users don’t need remote display, but I know I use remote display every day:
– Running monitor apps on my server, displayed on my desktop machine
– Running applications from my desktop machine displayed on my laptop via wireless, to have quick access to software I don’t have installed while away from my room
– Developing embedded systems software, it’s a lot easier to compile on the desktop and display remotely on the embedded system rather than cross compile and upload/flash/whatever every time. This is also helpful for prototyping software in environments that would be too bulky for the target device, things like Python or SQL databases on small handhelds.
But I don’t think X can hold up that long. It has too many design flaws that could mean suicide in the future (distant, mind you).
Please elaborate. I mean no offense, but I’m getting pretty tired of people claiming to suffer from design flaws without ever explaining them. You see, if you throw it into the conversation, perhaps we can discuss it and come to some sort of fix. That’s supposed to be the power of Open Source, right? BTW, I don’t consider missing alpha-translucency a design flaw. By the time someone figures out a real use for it, X will probably support it (through RENDER, or perhaps some kind of OpenGL compositing system a la Quartz Extreme).
I remember someone, not sure if it was you, claiming in this forum that X’s days were pretty much numbered as screen resolutions got higher and higher. The idea was that larger resolutions meant more traffic over the wire. In real life this situation seems to have been reversed. That is, the XFree86 of 10 years ago was much, much slower than XFree today at four times the resolution. The person seemingly forgot that over time both CPU/GPU power and bandwidth grow (GPU power almost exponentially). My XFree86/NVidia setup is probably one of the fastest graphics interfaces you can have on ANY desktop today.
I believe the future goes to something that does it differently than X11, like Fresco, instead of something that does it like X11 but just fixes some flaws (like speed, semi-transparency).
So you are willing to throw away everything and start with something that has never proven itself and really has NIL support from the community (compared to XFree). I would say that is rooting for change just for the sake of change. Berlin, and by extension Fresco, has been in the design phase for more than 5 years. I really think it’s time for an implementation of sorts, i.e. something I can program to. It looks really nice on paper, but that’s pretty much it.
It looks just like before (as Eugenia mentioned earlier): the argument keeps on going… But could somebody enlighten me as to where to get the update, since I can’t find it at xfree86.org?
I don’t mean to downgrade any project, but it seems Fresco’s progress is just tooooo slow. Yeah, Mr Rajan_r, you can say that Fresco/Berlin is good in this, that, etc., but compared to other projects it is still vapour. Take a look at PicoGUI: Micah has made good progress compared to the age of the project itself. Microwindows/NanoX may look stagnant currently, but I am coding with it now. XFree86’s progress is slow too, but at least everybody can develop with it, whereas Fresco has been unusable as-is for a long time now.
Without progress, a project like Cosmoe will be far better than Fresco/Berlin. For me, the good ones are those that can be used either for development or for daily work.
Has anybody ever tried OpenGUI? I know their website at http://www.tutok.sk but never gave it a try. Looking at the version number, it should be mature enough.
I just don’t like the screenshots; too classic, like the DOS era.
One of the big problems with X from a user perspective is that it can be very flickery sometimes… is there a way to enable double buffering in XFree?
Flickering display is usually a problem of the toolkit. Recent versions of Qt and GTK+ (2.0) have pretty much eliminated all the flickering behaviour. Flickering is particularly bad with GTK+ 1.x applications BTW.
I tried it… couldn’t get it to compile, and the source code is IMHO terribly messy (mostly uncommented, several 5000+ line files, most variables and comments in a language I can’t read).
From the design though, it looks like it’s your classic C++ widget toolkit layered on top of a graphics library. They even mention that the graphics library has a similar API to the Borland BGI….
As far as I can tell, it only supports one app at a time, and there’s no layout engine for the widgets.
Again, just my opinion and what I gathered without even using it, so if someone else here has used it please correct me.
That’s not the whole problem… Some of the apps I use (Eterm especially) get flickery where it’s obviously the app’s fault, but if I’m loading down my system with something else and drag a window across Galeon, the exposed areas flicker. I thought there was a switch you could enable in X to make it buffer each window, similarly to what OS X does. (I suppose nobody really uses it because it uses too much memory?)
If X is as bad as its detractors claim, why is it that major CGI/FX shops are switching to Linux for their animators’ workstations? I do not know much about animation, but it seems that this application would require high-performance graphics. Are they not using X?
That’s not the whole problem… Some of the apps I use (Eterm especially) get flickery where it’s obviously the app’s fault, but if I’m loading down my system with something else and drag a window across Galeon, the exposed areas flicker
That is still an application problem. Many applications don’t bother to optimize their handling of expose events, and if they don’t utilize double buffering, i.e. through offscreen rendering, you will get flickering. Applications based on GTK+ 2.0 do not flicker (nor do applications using the latest Qt toolkit). The toolkit libraries take care of expose event optimization and offscreen rendering.
Does anyone know of some sort of proxy X server that will let me re-connect to a root window or application?
Many times I find the need to disconnect an X app but keep it running and migrate it to a different X server. From what I know of Xlib, it doesn’t lend itself to this particular feature very readily, so it would probably need to be a feature of the X server itself, not the protocol.
Anyone have any thoughts or seen this sort of feature in an X server or a toolkit for that matter? (I’d rather not port my apps to a new toolkit, but it’s still a thought)
Anyone have any thoughts or seen this sort of feature in an X server or a toolkit for that matter? (I’d rather not port my apps to a new toolkit, but it’s still a thought)
The easiest solution is to just set up a VNC server on a box somewhere and run the applications in question on it. You can then connect/disconnect from different X clients. You will of course lose some performance because of VNC, but that is the tradeoff. If you work in a LAN environment, performance should still be very good (less so over slower lines).
One of the big problems with X from a user perspective is that it can be very flickery sometimes… is there a way to enable double buffering in XFree?
I don’t mean to pick on this guy, as everyone has already mentioned this is probably a GTK issue rather than an X issue and it sounded like an honest mistake. What is important is the rather constant bashing of X by people who can’t seem to keep clear which issues are:
1) X related
2) window manager related
3) graphic library related
4) setting in any of the above related
Issues like bad fonts (X doesn’t meaningfully have any fonts, and can support fonts in a way vastly more sophisticated than either Windows or Mac, were such fonts available) are another example. I think the people who bash X all the time should bother to figure out whether X is even the problem.
Please elaborate. I mean no offense but I’m getting pretty tired of people claiming to suffer from design flaws without ever explaining them
Ok, let me see: x-windows uses IP messaging, which is a whole lot slower than normal kernel system messaging, which means x-windows has high latency.
An example: imagine you’re running an X app. You click your mouse over, say, a button; that’s called an event, and events are usually handled through something called a message loop. So far it’s OK, but now x-windows has to use the TCP/IP protocol to send that event message to the X server (because you’re using a client), even when running on localhost (that’s your computer). That message can of course be broken into packets, which can arrive at the server in any order, and the server then reverses the whole process, bla bla bla (I’m getting tired of typing), which in the end means a very unresponsive GUI.
Ok, let me see: x-windows uses IP messaging, which is a whole lot slower than normal kernel system messaging, which means x-windows has high latency.
Ehm no, the X protocol only uses IP-based messaging if it’s talking to a remote client/server (i.e. over the Internet). When your X client runs on the same machine as the X server, they use Unix sockets for I/O, and sockets, at least under Linux, are bloody fast. The bottleneck is usually just the speed of your video card rather than X’s supposedly slow messaging! For images, local clients can also use the X Shared Memory Extension. That means both client and server can read/write the same memory address, meaning there is practically no overhead when transferring large data sets (pixmaps, video, etc).
An example: imagine you’re running an X app. You click your mouse over, say, a button; that’s called an event, and events are usually handled through something called a message loop. So far it’s OK, but now x-windows has to use the TCP/IP protocol to send that event message to the X server (because you’re using a client), even when running on localhost (that’s your computer). That message can of course be broken into packets, which can arrive at the server in any order, and the server then reverses the whole process, bla bla bla (I’m getting tired of typing), which in the end means a very unresponsive GUI.
You should inform yourself better, since the above is just plain wrong. Local clients don’t go through the TCP/IP layer at all! This is different for remote clients, of course, since they will need to talk over TCP/IP, as that’s the only reliable connection between two hosts.
There’s more, but I’m tired of typing, so:
Yeah, better find some other source for your information. This is what I mean with people blasting X when in reality they actually have no clue!
Oh that page you were referring to is veeery old, I think I remember reading that back in the early 90’s or something, things have changed quite a bit since then. Get with the program 🙂
Ehm no, the X protocol only uses IP-based messaging if it’s talking to a remote client/server (i.e. over the Internet). When your X client runs on the same machine as the X server, they use Unix sockets for I/O, and sockets, at least under Linux, are bloody fast. The bottleneck is usually just the speed of your video card rather than X’s supposedly slow messaging! For images, local clients can also use the X Shared Memory Extension. That means both client and server can read/write the same memory address, meaning there is practically no overhead when transferring large data sets (pixmaps, video, etc).
WRONG. The AF_UNIX socket family, like AF_INET, uses the TCP/IP protocol; the ones that bypass TCP/IP are raw sockets (AF_NS, I think, but I’m not sure), and sorry if I’m wrong (and I might be), but x-windows doesn’t use these, so x-windows is stuck with packets and high latency, I’m afraid. Of course, when I say “high latency” it’s because I’m comparing x-windows messaging latency with, for example, Microsoft Windows’ internal messaging latency, not MS Windows’ TCP/IP latency; Linux is quite able to crush Windows there.
The problem here is that x-windows was never meant to work as a real-time multimedia GUI, and it performs very badly as such, despite all the “Frankensteinish” updates to it. It’s like trying to make an elephant run like a cheetah, when it just was not meant to do so.
The problem here is that x-windows was never meant to work as a real-time multimedia GUI, and it performs very badly as such, despite all the “Frankensteinish” updates to it. It’s like trying to make an elephant run like a cheetah, when it just was not meant to do so.
Even if you consider X’s client/server messaging slow (I disagree with this), that has very little bearing on X’s multimedia quality. Video playback uses shared memory buffers. The only command the X server receives is to copy that buffer to the screen once per frame. DRI uses a similar approach, where a library on the client side accesses everything as directly as possible.
There have been attempts to make GUIs where everything is accessed directly on the client, or through shared memory. I haven’t seen one of these projects (DinX, DirectFB…) produce anything as flexible and secure as X.
WRONG. The AF_UNIX socket family, like AF_INET, uses the TCP/IP protocol; the ones that bypass TCP/IP are raw sockets (AF_NS, I think, but I’m not sure), and sorry if I’m wrong (and I might be), but x-windows doesn’t use these, so x-windows is stuck with packets and high latency, I’m afraid
Yes indeed, you are wrong. For local connections (client/server on the same machine) X uses the most efficient transport mechanism available, and that is not TCP/IP. Heck, there are even implementations that use shared memory IPC for the local transport! If you ever run XFree86 on Linux, check out the /tmp/.X11-unix directory. You will find a nice local X0 socket there; that is not a TCP/IP socket, BTW. Also do “man 7 unix” and read up on Unix sockets; I suspect you are confused. And one last thing: it is called “X Window System” or “X11”, not “x-windows”.
so x-windows is stuck with packets and high latency, I’m afraid. Of course, when I say “high latency”
Wrong again. And please, show us some numbers next time. You are simply drawing conclusions from wrong facts.
The problem here is that x-windows was never meant to be working as a real-time multimedia gui, and performs very badly as such, despite all the “frankensteinnish” updates to it, it’s like trying to make an elephant run like a chita, when it just was not meant to do so.
I can do multimedia just fine on X, thank you very much. The ultimate multimedia experience is probably those FPS games (Quake, UT, etc.), and they typically rule under Linux/XFree86. So please, stop spreading misinformation.
BTW I’m looking forward to running DOOM3 on XFree86 and kicking some Windows butt!
fooks: Ehm no, the X protocol only uses IP-based messaging if it’s talking to a remote client/server (i.e. over the Internet). When your X client runs on the same machine as the X server, they use Unix sockets for I/O, and sockets, at least under Linux, are bloody fast.
No, they’re not fast, not compared to pipes or SVR4 message queues, and especially not compared to shared memory, which is what X should use by default. Instead, use of shared memory in X is achieved through the MIT-SHM extension. Unfortunately, this requires application-specific support, and virtually nothing makes use of this extension.
The real architectural problems with X stem from its network transparency. The protocol contains a whole slew of proprietary drawing commands. In my opinion, a significantly better approach is to have the applications draw directly into a shared memory buffer, then the display server is only responsible for compositing these buffers into what’s displayed on your screen.
This is the Quartz approach, where applications draw directly into, or write PDF commands to, a shared CGContext buffer. Such an approach is infinitely faster.
Micah: Even if you consider X’s client/server messaging slow (I disagree with this), that has very little bearing on X’s multimedia quality. Video playback uses shared memory buffers. The only command the X server receives is to copy that buffer to the screen once per frame.
Wrong, this uses a hardware overlay, where an application writes directly to the video hardware. This has only become usable in XFree86 in the past two years through the Xv extension. Before that, the only alternative was the horrible DGA extension, which the XFree86 developers rightly disable by default in XF4.
Aren’t you the main PicoGUI developer? What are they teaching you over there at CU anyway? I thought you’d know this… perhaps you haven’t attempted to implement overlay support into PicoGUI yet. At any rate I have my own issues with PicoGUI’s architecture, namely that it doesn’t solve the architectural problems with X that I’ve just stated… it comes off like X rehashed.
Hug0: The AF_UNIX socket family, like AF_INET, uses the TCP/IP protocol
Perhaps you’re confused by the type/protocol parameters you pass socket() when you create a UNIX domain socket. Indeed you’re probably passing SOCK_STREAM, which with AF_INET sockets makes use of TCP. But no, Unix domain sockets are essentially bidirectional pipes and for obvious reasons don’t require the use of TCP.
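A quick way to convince yourself of this (a minimal sketch; the event payload is made up): create a pair of connected AF_UNIX stream sockets and note that data flows through the kernel just fine, while the TCP option layer simply doesn’t exist for them.

```python
import socket

# socketpair() gives two connected AF_UNIX stream endpoints:
# a bidirectional pipe inside the kernel, no IP packets involved
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

a.sendall(b"click event")   # "client" side sends a made-up event
data = b.recv(1024)         # "server" side receives it via the kernel

# asking a Unix socket for TCP options fails, because for AF_UNIX
# there is no TCP layer underneath SOCK_STREAM
try:
    b.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
    tcp_layer_present = True
except OSError:
    tcp_layer_present = False

a.close()
b.close()
```

So SOCK_STREAM just means "reliable byte stream"; which transport provides it depends on the address family, and for AF_UNIX that transport is the kernel itself.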
fooks: I can do multimedia just fine on X, thank you very much.
I mean, this just isn’t true. You lack an underlying media architecture to really do any sort of media work. Show me a decent linear video editing program which supports a wide variety of codecs that you can run in X.
No, they’re not fast, not compared to pipes, SVR4 message queues, and especially not compared to shared memory, which is what X should use by default.
By default local connections use the fastest transport available. That really depends on the X implementation and the underlying OS. Either way, the local transport is not the bottleneck most of the time.
Instead, use of shared memory in X is achieved through the MIT-SHM extension. Unfortunately, this requires application-specific support, and virtually nothing makes use of this extension.
Popular X toolkits do use it (when told to), and by extension so do all the applications written for those toolkits (GTK+ 2.0, for example).
I mean, this just isn’t true. You lack an underlying media architecture to really do any sort of media work. Show me a decent linear video editing program which supports a wide variety of codecs that you can run in X.
I personally use Cinelerra ( http://www.heroinewarrior.com ), but it’s a real bitch to compile. On the playback side I use mplayer ( http://www.mplayerhq.hu ) for anything video and AlsaPlayer ( http://alsaplayer.org ) for audio. I also have a copy of Houdini for Linux (a $3000 3D Animation Tool) and it really rocks, wish I had more time to learn it. So somehow I don’t really miss an “underlying media architecture”, whatever that might be. I do however look forward to an OpenML implementation for Linux ( http://www.khronos.org ).
Aren’t you the main PicoGUI developer? What are they teaching you over there at CU anyway? I thought you’d know this… perhaps you haven’t attempted to implement overlay support into PicoGUI yet. At any rate I have my own issues with PicoGUI’s architecture, namely that it doesn’t solve the architectural problems with X that I’ve just stated… it comes off like X rehashed.
I started PicoGUI a couple of years before I started attending CU; I’d been programming long before I saw anything about computers in school.
Obviously you don’t know much about PicoGUI’s architecture…
YES, it’s client/server. BUT, it does this almost completely differently from X. With widgets in the server, there’s no IPC mechanism at all between them and the video driver.
PicoGUI’s architecture is also flexible enough that in the future there will be support for other methods of connecting the client and server. Shared memory message queues (PicoGUI already supports SHM bitmaps) or even plain procedure calls by dlopen()’ing client apps into the server. With this flexibility, various systems can easily make speed/security tradeoffs.
I know the difference between Xvideo and MIT SHM. YUV overlays work pretty much the same in any GUI, so it’s not really worth going into. The example I had in mind were games that do all the rendering themselves then use SHM and XPutImage to blit to the screen. If the game is double-buffered anyway, this incurs almost zero overhead. Not like SHM in X is a new thing either…
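The general idea, independent of X, can be sketched in Python with a shared memory segment standing in for the SHM pixmap (the buffer dimensions here are made up for illustration):

```python
from multiprocessing import shared_memory

WIDTH, HEIGHT, BPP = 320, 240, 4  # a tiny hypothetical back buffer

# "Client" side: render frames into a shared buffer instead of
# pushing every pixel through the display connection.
backbuf = shared_memory.SharedMemory(create=True, size=WIDTH * HEIGHT * BPP)
backbuf.buf[0:4] = bytes([255, 0, 0, 255])  # draw one red RGBA pixel

# "Server" side: attach to the same segment by name; the only thing
# that needs to cross the socket per frame is "blit this buffer".
screen = shared_memory.SharedMemory(name=backbuf.name)
first_pixel = bytes(screen.buf[0:4])
print(first_pixel)  # b'\xff\x00\x00\xff'

screen.close()
backbuf.close()
backbuf.unlink()
```

This is the same division of labor as SHM + XPutImage: the expensive pixel data never travels over the connection, only a small “copy it” command does.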
Here are some latency results I found; I still think that X11 falls under the AF_UNIX category:
Those figures are microseconds! Also, benchmark figures are usually obsolete by the time they’re posted. But let’s run with them for now. This link actually proves that the local transport is NOT the bottleneck at all. My guess is that the latency you experience has nothing to do with X per se.
The tests show that on Linux 2.4, UNIX domain sockets are essentially as fast as pipes, and both are a good bit faster than SVR4 message queues.
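Anyone can reproduce a rough version of that comparison. Here’s a single-process probe in Python; note it measures write+read overhead in one process, not a true two-process round trip, so treat the numbers as ballpark only:

```python
import os
import socket
import time

N = 20_000

def us_per_msg_pipe(n):
    """Rough cost of a 1-byte write+read through a pipe, in microseconds."""
    r, w = os.pipe()
    t0 = time.perf_counter()
    for _ in range(n):
        os.write(w, b"x")
        os.read(r, 1)
    dt = time.perf_counter() - t0
    os.close(r)
    os.close(w)
    return dt / n * 1e6

def us_per_msg_unix_socket(n):
    """Same probe over an AF_UNIX stream socket pair."""
    a, b = socket.socketpair()
    t0 = time.perf_counter()
    for _ in range(n):
        a.send(b"x")
        b.recv(1)
    dt = time.perf_counter() - t0
    a.close()
    b.close()
    return dt / n * 1e6

print(f"pipe:        {us_per_msg_pipe(N):.2f} us/msg")
print(f"unix socket: {us_per_msg_unix_socket(N):.2f} us/msg")
```

On a typical Linux box both come out within a small factor of each other, at single-digit microseconds per message, which is exactly the point: the transport is not where X’s time goes.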
Instead, use of shared memory in X is achieved through the MIT-SHM extension. Unfortunately, this requires application-specific support, and virtually nothing makes use of this extension.
>>>>>>
Umm, KDE has support for MIT-SHM built into its image classes (like KPixmapIO).
The real architectural problems with X stem from its network transparency. The protocol contains a whole slew of proprietary drawing commands. In my opinion, a significantly better approach is to have the applications draw directly into a shared memory buffer, then the display server is only responsible for compositing these buffers into what’s displayed on your screen.
>>>>>>>>>>>
A) That takes up tons of memory (for the off-screen buffers) and worse, blows your caches.
B) That limits you to software rendering. I wrote a highly tweaked, very simple imaging library a few years ago. It performed less than half as well as using X calls.
This is the Quartz approach, where applications draw directly into, or write PDF commands to, a shared CGContext buffer. Such an approach is infinitely faster.
>>>>>>>>>>
Right. Which explains why Mac OS X has such great performance? Please. The future is hardware acceleration. Due to the inherent latency of accessing an off-CPU device, almost all hardware is designed to execute command buffers rather than be programmed directly. X (with the DRI) handles this model naturally. Quartz doesn’t.
Wrong, this uses a hardware overlay, where an application writes directly to the video hardware.
>>>>>>>
This has only become usable in XFree86 in the past two years through the Xv extension. Before that, the only alternative was the horrible DGA extension, which the XFree86 developers rightly disable by default in XF4.
>>>>>>>
Now that’s just moronic. There is nothing that bad about DGA except for the fact that direct graphics access is inherently dangerous, and these days, mostly useless. The real reason DGA is disabled is because not all drivers in XFree 4.x handle it properly.
I mean, this just isn’t true. You lack an underlying media architecture to really do any sort of media work. Show me a decent linear video editing program which supports a wide variety of codecs that you can run in X.
>>>>>>>>>
You mean like aRts and GStreamer? Exactly what are you missing?
Fine! I give up, you’re right, using a client/server model is great; it doesn’t matter that you’ll probably only need one client for desktop use.
No, having a server running for just one client isn’t a complete waste.
Prove it. Or shut up. Actually, I’ll make it easy for you: you’re wrong. The tests show that on Linux 2.4, UNIX domain sockets are essentially as fast as pipes, and both are a good bit faster than SVR4 message queues.
LOL, hi there Linux zealot. Nice attitude, are you taking an attack on X personally? It’s also nice you assumed I was talking about Linux. Unfortunately, I wasn’t. Benchmark SVR4 message queues on an OS I care about, like Solaris, Irix, or FreeBSD. I think you’ll find things are somewhat different.
A) That takes up tons of memory (for the off-screen buffers) and worse, blows your caches.
And the shm extension doesn’t? Ever look at how much memory your X server is using?
That limits you to software rendering.
Uhh, as opposed to what? Your alternatives in X are writing to a socket or to a shared memory buffer. If you’re using a Quartz renderer you are also writing to a shared memory buffer, except it’s one which can be immediately composited by the WindowServer and displayed.
I wrote a highly tweaked, very simple imaging library a few years ago. It performed less than half as well as using X calls.
And of course you were using a Quartz renderer, right?
You mean like aRts and GStreamer? Exactly what are you missing?
gstreamer is the only thing close to what I want, and has heinous performance issues and is likewise buggy as hell. I’d rather have usable, developed software like Quicktime or DirectShow. Show me a decent non-linear video editor for your OS of choice that isn’t OS X or Windows, then let’s look at how infinitely unusable it is compared to Final Cut Pro or Premiere.
LOL, hi there Linux zealot. Nice attitude, are you taking an attack on X personally? It’s also nice you assumed I was talking about Linux. Unfortunately, I wasn’t. Benchmark SVR4 message queues on an OS I care about, like Solaris, Irix, or FreeBSD. I think you’ll find things are somewhat different.
Yup, and slowness on Solaris is the reason people bitch and moan about X’s slowness on Linux, right?
And the shm extension doesn’t? Ever look at how much memory your X server is using?
Remember that X mmaps the video card’s memory, so if your video card has 64MB or 128MB of RAM, it will show up somewhere, right?
And do you remember what kind of machines X was targeting when it was originally designed?
Uhh, as opposed to what? Your alternatives in X are writing to a socket or to a shared memory buffer. If you’re using a Quartz renderer you are also writing to a shared memory buffer, except it’s one which can be immediately composited by the WindowServer and displayed.
Video cards since Windows 3.1 have had “2D acceleration”. It means that you can draw graphics primitives in hardware. Believe it or not, X servers use this. So your program does not turn pixels on and off in the framebuffer when drawing a line (like Quartz does), but tells the card to draw a line, circle, whatever: solid, dashed, etc.
Arguing that X is slow and holding up Quartz as the opposing example is a bad idea. Try running X and Quartz on the same machine, do some benchmarks, and then try again…
gstreamer is the only thing close to what I want, and has heinous performance issues and is likewise buggy as hell. I’d rather have usable, developed software like Quicktime or DirectShow. Show me a decent non-linear video editor for your OS of choice that isn’t OS X or Windows, then let’s look at how infinitely unusable it is compared to Final Cut Pro or Premiere.
I agree, gstreamer is buggy as hell. But a multimedia architecture doesn’t have anything to do with the original claim that “X sucks, without reason”. However, I’m willing to wait and get a multimedia architecture with a sane API, as opposed to DirectShow or QuickTime.
Also note that Windows-based NLEs usually use VfW or QuickTime, not DirectShow. DS is a playback-oriented architecture, with editing services thrown in as an afterthought.
Isn’t it time we had something more modern?
But it seems the need for backwards compatibility will leave us with X for a long, l-o-n-g time. Oh well, at least it’s improving.
I mean really… hasn’t everyone got tired of these conversations?
Person 1: XFree is old. We need something modern.
Rayiner: No, XFree is kick ass.
Person 3: Well, there is DirectFB.
Person 4: And there is Berlin/Fresco too.
Person 5: Hey, do not forget Cosmoe!
Person 1: Yeah, but XFree is really old…
Rayiner: Naah… XFree is great. And fast.
Yawn….
If you mean something that only recently came into existence, you probably shouldn’t be using UNIX in the first place.
If you mean something that only recently came into existence, you probably shouldn’t be using UNIX in the first place.
I use Quartz, and it’s beautiful.
you probably shouldn’t be using UNIX in the first place.
why shouldn’t I? I find linux to be a very nice system, and i use it as my main OS.
Seems to be that XFree sucks hard for most people.
I know that my gaming performance under Linux is nowhere near what I get in Windows. RTCW runs below 20fps under Linux where it runs rather smoothly (Maybe 30-40fps?) under Windows.
One thing I really don’t get about windowing systems lately is the transparent/translucent stuff. It may look cool, but I can’t imagine using this at all. If you’re working more than an hour a day in front of a monitor, the last thing you need is for your eyes to be constantly trying to focus on something that is distorted by things behind it.
With regards to X sucking… if you really want a better windowing system for your favourite (Canadian spelling) OS, then develop something else, or better yet join one of the projects that Eugenia mentioned.
Your original comment complained about wanting something more modern, yet at the same time you use GNU/Linux which is modeled after a very old system (UNIX).
XFree 4 is very modular, pretty stable (if you have a decent graphics card), and it is fast. And yes, it can do hardware acceleration on the alpha transparency stuff with XRender. And it has excellent networking support built in.
The very few games that were ported from Windows to Linux/XFree (UT, Quake, RTCW) have been reported to be faster on Linux than on Windows.
I really wish they’d fix the i830 support… 8-bit 1024×768 really, really sucks. I guess that’s what I get for buying a stupid Gateway laptop.
Well, I had vowed to stay out of these discussions, but oh well.
First off, this is a bug fix release. If you don’t like X or XFree, why pollute our discussion with your biases?
And, let me tell you something. For those people out there who do more with their computers than play games and talk on IRC, X is not just fine, it’s fantastic. As a person who spends 95% of his time on his personal computer (running Gentoo) writing C and C++ code, I couldn’t ask for a better display system. X gives *me* control over how my desktop looks, feels, and everything in between. My little ThinkPad, with its antiquated display system (which, shudder, can’t even give me a proper translucent terminal), is the best development environment I could ask for.
Note that I used to run BeOS — I was even a BeOS zealot at one point. My point: OS/display-system/etc zealotry is a fool’s game. Just *use* your damn computer.
Nothing we have right now (computerwise) will be the same ten years from now, probably not even five. THINGS CHANGE. Sometimes they get better, sometimes not. Sometimes we have to grieve (poor, poor BeOS), and then we have to move on and get back to work.
I don’t run my computer so I can compare framerates for GLGears or repeatedly resize my windows to see what the redraw speed is. I use my computer to WRITE CODE, code which I use to DO THINGS. A computer is a tool.
End soapbox.
I thought the next version after 4.2 would be 4.3? When will 4.3 be out, anyway?
Some info: http://www.osnews.com/story.php?news_id=1294
I would also like to see a ‘memory leak fix’ release.
XFree leaks here ’n’ there; it’s not fun trying to debug memory leaks in GUI apps, only to discover that half of them are buried deep under Qt/GTK, in the base X libraries. Not to mention the server itself.
Someone care to take a tour of the XFree libs with valgrind and fix what they find?
Hopefully it will remain for a long long time, so I can get some work done. And use my PS2 when I feel the need to play games.
The very few games that were ported from windows to linux-xfree (ut, quake, rtcw) have been reported to be faster on linux than on windows..
Not really, I only saw one review where Linux had a couple more fps than Windows, and they were comparing a tweaked SuSE, with only a few daemons running, against an out-of-the-box Windows ME (sloooow). But anyway, games get a few more fps in Windows mostly because they’re designed to run there and then sometimes get ported to Linux. For the most part, speed in games (running on the same API: OpenGL) is derived from the quality of the drivers; most of the work is done either by the GPU or the CPU, with the hardware drivers and the API acting in between. Operating systems have little to do with it.
PS: games (UT, Quake, RTCW) are NOT ported to XFree (THANK GOD)
PS nº2: (to make this more on-topic) XFree is a piece of cr*p
Well, I’d like to avoid getting into the whole X issue again, except to comment:
X on my P4 2.0 works just fine, almost as fast as Win2K on the same machine. However, I’ve got it a little tweaked:
1) I’m running X at -11 priority. You can do this by editing your config file (which one depends on the distro).
2) I’m running all KDE apps at -10 priority. This can be done by changing your xinitrc from
startkde
to
nice -n “-10” startkde
3) I’m running all Konsole apps at 1 priority. This can be done by right-clicking on the konsole link, and changing the text of the link from
konsole
to
nice -n “11” konsole
(note: the resulting priority is -10 + 11 = 1)
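The arithmetic works because nice values are relative: `nice -n 11` adds 11 to whatever the parent process already has. A small Python sketch (illustrative only, POSIX-specific) shows the same effect:

```python
import os
import subprocess
import sys

# os.nice(0) adds 0 to the current nice value and returns it.
parent_nice = os.nice(0)

# Launch a child with its niceness raised by 11 -- the same effect
# as prefixing the command with `nice -n 11`.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.nice(0))"],
    capture_output=True,
    text=True,
    preexec_fn=lambda: os.nice(11),
)
child_nice = int(child.stdout)
print(parent_nice, child_nice)  # child runs at parent_nice + 11
```

So a konsole launched from a KDE session running at -10 ends up at 1, which is exactly what the `nice -n "11" konsole` link above achieves.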
With this setup, my cursor never jumps and the UI never stalls, even when running a big compile in the background. That can’t be said for WinXP on the same machine.

The only real trouble spot with X11 for me is fonts. The font rendering itself is fine (contrary to what Eugenia has said in the past, FreeType is quite a high-quality renderer (with the patented TrueType interpreter on, anyway), and in theory all TrueType renderers should produce almost identical output anyway due to the hinting mechanism). However, the sub-pixel rendering algorithm leaves a little to be desired. WinXP renders Tahoma 8pt on my 133 dpi screen at about 1.5 pixels thick (one solid pixel, shaded pixels around it). Xft renders the same type more lightly, exactly one pixel thick. That makes it very hard to see on my screen, because the pixels are so tiny.

This is most likely due to three things: 1) ClearType doesn’t seem to apply all the hints that snap text to certain pixel widths. This makes text overall a little more fuzzy, but on a high-res screen the fuzz blends to make very nice-looking text. 2) ClearType has a different filtering algorithm (in subpixel rendering, btw, the filtering algorithm is used to get rid of the color fringes that result from using the monochrome subpixels for detail), which produces more color fringing than Xft’s but leads to nicer letter shapes. 3) ClearType gamma-corrects the anti-aliased text, but Xft doesn’t. Patches for 1) and 3) are in the works, but I don’t know what is being done about 2), or whether the ClearType patents hamper that.
I used to think like you do, that XFree86 was the cause of all of my video problems. Running Tux Racer, Quake III and some others was painful, ~3 frames per second. Turns out the mediocre “nv” video driver that comes with Red Hat 7.x couldn’t render 3D for my GeForce2 MX card. A quick stop at NVIDIA to download their GLX and “nvidia” driver fixed everything. Now everything runs at more than 30 frames per second.
the mediocre “nv”
>>>>>>>
Hey, don’t blame the XF guys for that. The ‘nv’ driver is actually slightly faster in 2D than the NVIDIA driver. It’s just that NVIDIA won’t give specs to the 3D core.
Sorry about that. Didn’t know about NVIDIA not releasing the specs to the 3D core. Wrong choice of words to say it doesn’t do everything I wanted it to.
[i]Not really, I only saw one review where Linux had a couple more fps than Windows, and they were comparing a tweaked SuSE, with only a few daemons running, with an out-of-the-box Windows ME[/i]
Well, I heard it from a friend:
PC:
nvidia tnt2 with an athlon 500MHz, 256mb ram
(you notice speed differences better on a slow machine :-))
OS:
Suse 7.x box, only tweaking applied was installing the nvidia drivers.
Windows 2000, also with the official nvidia drivers.
Game:
RTCW
His reaction was that RTCW was a lot more “playable” (meaning faster) on Linux than on Windows.
Btw, those 3D games *do* use modules that are part of XFree86.
I’m wondering how much of X itself and how much of the Freetype library would have to be modified to get Cleartype’s results? I mean, I thought X itself could do gamma correction for images if needed. Or is that coming in the future? Gamma correction seems like it would be too useful to be used *only* in font rendering. Seems like something X.org would have done… Oh well, you said it was coming, so it’s possible one way or another.
As for Freetype, yes, it seems as if its font “prettification” algorithms would have to be selectively modified to get Cleartype’s results. And I bet some of it is covered under Microsoft’s patents. It’s probably the really good parts too, just like Apple’s (happily well documented) TrueType hinting patents. Of course, there’s nothing stopping people from turning on the capability during compilation, and it looks fabulous (I’m using version 2.1.2). Makes me wish that X had a QuarkXPress or Adobe InDesign type application to really show off the advanced features of OpenType fonts that Freetype can use. I haven’t the coding skills yet to actually make one, but I’d love to help out.
–JM
Um, when I said “Happily well documented Truetype patents”, there was meant to be some sarcasm there. Thank you.
–JM
But I’ve been using the nvidia drivers ever since I figured out how to install them, which was about 5 months ago. And yes, I’d have to agree that the default nv drivers do really suck.
I’m wondering how much of X itself and how much of the Freetype library would have to be modified to get Cleartype’s results?
>>>>>>
It really depends on exactly what ClearType is doing. If it’s just a matter of a better filter, then most of the work can be done in Xft. If it needs some work on modifying how hints are applied, then FreeType itself would need to be modified.
I mean, I thought X itself could do gamma correction for images if needed.
>>>>>>
Just on the whole image (xgamma -gamma <gamma_value>). But it can’t do it for specific bitmaps. I’m not entirely sure why the text itself needs to be gamma corrected if the whole screen is being corrected, but I do know it makes a difference. You know the ClearType control panel on MS’s site? It basically modifies the gamma correction factor used.
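For the curious, here’s why uncorrected anti-aliasing looks too thin. This is a generic sketch; the 2.2 gamma is a typical assumption, not ClearType’s actual value:

```python
# An anti-aliased edge pixel with 50% coverage should look half-bright,
# but a linear 50% framebuffer value appears much darker on a ~2.2-gamma
# monitor, which is why uncorrected AA text looks thin and washed out.
GAMMA = 2.2  # typical display gamma; the real value varies per monitor

def corrected(coverage, gamma=GAMMA):
    """Map linear coverage in [0, 1] to a gamma-corrected 0..255 value."""
    return round((coverage ** (1.0 / gamma)) * 255)

def uncorrected(coverage):
    return round(coverage * 255)

for c in (0.25, 0.5, 0.75):
    print(c, uncorrected(c), corrected(c))
# 0.5 coverage: 128 uncorrected vs 186 corrected -- a visibly heavier stroke
```

Whole-screen xgamma correction can’t do this, because the blend of glyph and background has to happen in linear space per pixel, before the result is written to the (already gamma-encoded) framebuffer.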
As for Freetype, yes, it seems as if its font “prettification” algorithms would have to be selectively modified to get Cleartype’s results. And I bet some of it is covered under Microsoft’s patents. It’s probably the really good parts too, just like Apple’s (happily well documented) TrueType hinting patents. Of course, there’s nothing stopping people from turning on the capability during compilation, and it looks fabulous (I’m using version 2.1.2).
>>>>>>>
I don’t really know the patent situation right now. So far one patent has been granted, and there are something like 20 in the pipeline. Of course, the filtering algorithm is pretty well documented: http://research.microsoft.com/~jplatt/cleartype/
Beware, lots of math! However, exactly how ClearType deals with hints and exactly how the gamma correction works are still unknowns.
I don’t understand why people keep bashing XFree86. It seems to work fine for me; I am not a big gamer, but video is pretty darn good. And the key word here is free: if you don’t like it, buy Windows. Just mho.
“And the key word here is free, and if you don’t like it buy windows.”
Well, I guess if it’s free nobody should make suggestions/complaints about its performance in certain aspects! I mean really, it’s only the dominant graphical environment for Linux (*nix as well? I’ve only had Linux). We should just be happy and not give our feedback.
Well, I guess if it’s free nobody should make suggestions/complaints about its performance in certain aspects! I mean really, it’s only the dominant graphical environment for Linux (*nix as well? I’ve only had Linux). We should just be happy and not give our feedback
>>>>>>>>>
We should definitely give feedback, but not bitch unless we can certify there’s a problem. With MS, if WinXP is slow, we can go bitch (“the GUI is slow, fix it”). On Linux, if the GUI is slow, just blaming XFree is a cop-out. You first have to take care to see exactly where the problem is, and then take your problem to the appropriate people. Also, just saying “X is slow, when will it die” is counterproductive. A much more useful comment would be “Function XPutImage performs very poorly with the ATI driver, could you take a look at that issue?”
X is dominant
XFree can be replaced by another X to give you more speed, configurability, etc. It’s just that most people don’t want to pay for it.
The XFree people do really good work, but they could do better, and they probably could do with some help.
In order, 1 paragraph each —
I’m sorry, but I’m an end user with not much more knowledge of XFree86 than the config file, and a lot of experience with poor performance.
———–
Does Mandrake or Slackware ship with any of these alternatives? I’m really not up for wrestling with installing by hand something so important.
———–
I don’t mean to piss you guys off, I’m serious about it.
Slackware only comes with XFree[0]; if you wanted one of the other projects, you would have to download and install it yourself, or look for a package at
http://www.linuxpackages.net/
I had this one in my links. They have a Java-based X:
http://www.jcraft.com/weirdx/
[0] As far as I know. I looked in the FTP extras dir, but the packages are not there.
X detractors seem to concentrate on the speed of the GUI. That’s never been a problem for me since ’92, but I guess I know what I’m doing?
BUT… please also remember the PLATFORM INDEPENDENT, REMOTE DISPLAY CAPABILITY of X, which NOTHING ELSE HAS!
NO, Windows Terminal Server, VNC, PC Anywhere, etc. do NOT substitute for this.
Every app. on your desktop could be running on a different computer, even different KINDS of computers.
This allows very thin, cheap client systems, easy support of centralized applications, COMPLETELY locked down, controlled desktops in environments where that is desirable, and better response than web-based client-server environments.
Nothing has ever come close to X in this respect.
X is definitely not the only client/server GUI, just the most popular. Among the others…
– QNX’s Photon GUI is not only network transparent, it has some pretty spiffy features like the ability to split a client’s display stream to multiple servers. There is a version of the Photon GUI server that runs under windows also, so there’s a small cross-platform aspect.
– VNC isn’t elegant, but it’s the most portable system possible
– Fresco has an extremely versatile client/server design, using CORBA for everything
– PicoGUI has a consistent and speedy interface even over slow network connections, one effect of implementing most functionality on the server side
I’m sure there are others… but not only have other GUIs come close to X in network transparency, they have gone far beyond it.
Yeah, I’m sure you guys know I’m a Fresco supporter 😛
Now here’s my rant. XFree is good enough for today. Have you guys played Quake and TuxRacer on XFree using nVidia’s drivers: it was fast!
But I don’t think X can hold up that long. It has too many design flaws that could mean suicide in the future (distant, mind you). I believe the future goes to something that does it differently than X11, like Fresco, instead of one that does it like X11 but just fixing some flaws (like speed, semi-transparency).
And for those who think semi-transparency is not important, guess what: it has to do with a lot more than eye candy. Office, for example, uses alpha blending a lot for the pies, charts, and graphs in Excel.
Alex: BUT… please also remember the PLATFORM INDEPENDENT, REMOTE DISPLAY CAPABILITY of X, which NOTHING ELSE HAS!
Probably the most important reason Fresco chose CORBA is the remote display capability (vector objects in Fresco are different from the raster objects of X11, BTW).
Besides, ever considered why remote display capabilities have never been a focus of Windows and Mac OS? A LOT of people DON’T need it, and the number who do is shrinking rapidly.
Besides, ever considered why remote display capabilities have never been a focus of Windows and Mac OS? A LOT of people DON’T need it, and the number who do is shrinking rapidly.
It may be that the majority of users don’t need remote display, but I know I use remote display every day:
– Running monitor apps on my server, displayed on my desktop machine
– Running applications from my desktop machine displayed on my laptop via wireless, to have quick access to software I don’t have installed while away from my room
– Developing embedded systems software, it’s a lot easier to compile on the desktop and display remotely on the embedded system rather than cross compile and upload/flash/whatever every time. This is also helpful for prototyping software in environments that would be too bulky for the target device, things like Python or SQL databases on small handhelds.
But I don’t think X can hold up that long. It has too many design flaws that could mean suicide in the future (distant, mind you).
Please elaborate. I mean no offense, but I’m getting pretty tired of people claiming X suffers from design flaws without ever explaining them. You see, if you throw it into the conversation, perhaps we can discuss it and come to some sort of fix. That’s supposed to be the power of Open Source, right? BTW, I don’t consider missing alpha translucency a design flaw. By the time someone figures out a real use for it, X will probably support it (through RENDER, or perhaps some kind of OpenGL compositing system a la Quartz Extreme).
I remember someone, not sure if it was you, claiming in this forum that X’s days were pretty much numbered as screen resolutions got higher and higher. The idea was that larger resolutions meant more traffic over the wire. In real life this situation seems to have been reversed: XFree86 from 10 years ago was much, much slower than XFree today at 4 times the resolution. The person seemingly forgot that over time both CPU/GPU power and bandwidth grow (GPU power almost exponentially). My XFree86/NVidia setup is probably one of the fastest graphics interfaces you can have on ANY desktop today.
I believe the future goes to something that does it differently than X11, like Fresco, instead of one that does it like X11 but just fixing some flaws (like speed, semi-transparency).
So you are willing to throw away everything and start with something that has never proven itself and really has nil support from the community (compared to XFree). I would say that is rooting for change just for the sake of change. Berlin, and by extension Fresco, has been in the design phase for more than 5 years. I really think it’s time for an implementation of sorts, i.e. something I can program to. It looks really nice on paper, but that’s pretty much it.
-fooks
It looks just like before (as Eugenia mentioned earlier): the argument keeps on going… But could somebody enlighten me as to where to get the update, since I can’t find it at xfree86.org?
I don’t mean to downgrade any project, but it seems Fresco’s progress is just tooooo slow. Yeah, Mr Rajan_r, you can say that Fresco/Berlin is good in this, that, etc., but compared to other projects it is still just vapour. Take a look at PicoGUI: Micah has made good progress compared to the age of the project itself. Microwindows/NanoX, although it looks stagnant currently, is what I am coding with now. XFree86’s progress is slow too, but at least everybody can develop with it, whereas Fresco is still not usable, as it has been since long ago.
Without progress, a project like Cosmoe will be far better than Fresco/Berlin. For me, the good ones are those that can be used either for development or for daily work.
One of the big problems with X from a user perspective is that it can be very flickery sometimes… is there a way to enable double buffering in XFree?
Has anybody ever tried OpenGUI? I know their website at http://www.tutok.sk but never gave it a try. Looking at the version number, it should be mature enough by now.
I just don’t like the screenshot, too classic like DOS era.
One of the big problems with X from a user perspective is that it can be very flickery sometimes… is there a way to enable double buffering in XFree?
Flickering display is usually a problem of the toolkit. Recent versions of Qt and GTK+ (2.0) have pretty much eliminated all the flickering behaviour. Flickering is particularly bad with GTK+ 1.x applications BTW.
-fooks
Has anybody ever tried OpenGUI?
I tried to try it… couldn’t get it to compile, and the source code is IMHO terribly messy. (Mostly uncommented, several 5000+ line files, most variables and comments in a language I can’t read.)
From the design though, it looks like it’s your classic C++ widget toolkit layered on top of a graphics library. They even mention that the graphics library has a similar API to the Borland BGI….
As far as I can tell, it only supports one app at a time, and there’s no layout engine for the widgets.
Again, just my opinion and what I gathered without even using it, so if someone else here has used it please correct me.
That’s not the whole problem… Some of the apps I use (Eterm especially) get flickery where it’s obviously the apps’ fault, but if I’m loading down my system with something else and drag a window across galeon, the exposed areas flicker. I thought there was a switch you could enable in X to make it buffer each window, similarly to what OS X does. (I suppose nobody really uses it because it uses too much memory?)
If X is as bad as its detractors claim, why is it that major CGI/FX shops are switching to Linux for their animators’ workstations? I do not know much about animation, but it seems that this application would require high-performance graphics. Are they not using X?
That’s not the whole problem… Some of the apps I use (Eterm especially) get flickery where it’s obviously the apps’ fault, but if I’m loading down my system with something else and drag a window across galeon, the exposed areas flicker
That is still an application problem. Many applications don’t bother to optimize their handling of expose events, and if they don’t utilize double buffering (e.g. through offscreen rendering) you will get flickering. Applications based on GTK+ 2.0 do not flicker (nor do applications using the latest Qt toolkit); the toolkit libraries take care of expose-event optimization and offscreen rendering.
-fooks
Does anyone know of some sort of proxy X server that will let me re-connect to a root window or application?
Many times I find the need to disconnect an X app but keep it running and migrate it to a different X server. From what I know of Xlib, it doesn’t lend itself to this particular feature very readily, so it would probably need to be a feature of the X server itself, not the protocol.
Anyone have any thoughts or seen this sort of feature in an X server or a toolkit for that matter? (I’d rather not port my apps to a new toolkit, but it’s still a thought)
Anyone have any thoughts or seen this sort of feature in an X server or a toolkit for that matter? (I’d rather not port my apps to a new toolkit, but it’s still a thought)
The easiest solution is to just set up a VNC server on a box somewhere and run the applications in question on it. You can then connect/disconnect from different X clients. You will of course lose some performance because of VNC, but that is the tradeoff. If you work in a LAN environment, performance should still be very good (less so over slower lines).
-fooks
One of the big problems with X from a user perspective is that it can be very flickery sometimes… is there a way to enable double buffering in XFree?
I don’t mean to pick on this guy; as everyone has already mentioned, this is probably a GTK issue rather than an X issue, and it sounded like an honest mistake. What is important is the rather constant bashing of X by people who can’t seem to keep clear which issues are:
1) X related
2) window manager related
3) graphic library related
4) settings in any of the above related
Issues like bad fonts are another example (X doesn’t meaningfully have any fonts of its own, and can support fonts in a way vastly more sophisticated than either Windows or the Mac, were such fonts available). I think the people who bash X all the time should bother to figure out whether X is even the problem.
I think the people who bash X all the time should bother to figure out if X is even the problem.
Exactly! 🙂
-fooks
If X is as bad as its detractors claim, why is it that major CGI/FX shops are switching to Linux for their animators’ workstations?
They are? Can you show some documented evidence of this? From what I’ve seen they’re only using it on their clusters to run the modeling software.
I do not know much about animation, but it seems that this application would require high performance graphics. Are they not using X?
If they are “using X” they are using it as a platform for OpenGL, in which case X itself is rather inconsequential.
Note that the largest rendering farm ever built belongs to the Two Towers movie project, and it runs Linux. Maybe not X, but Linux.
If X is as bad as its detractors claim, why is it that major CGI/FX shops are switching to Linux for their animators’ workstations?
They are? Can you show some documented evidence of this? From what I’ve seen they’re only using it on their clusters to run the modeling software.
http://www.linuxjournal.com/article.php?sid=6011
Also, I read somewhere that Dreamworks is switching to Linux on workstations (they’ve used them to make “Spirit”).
And on “Scooby Doo” (among other works) Rhythm & Hues used FilmGimp for compositing:
http://filmgimp.sourceforge.net/
Anyway, it’s true that mostly proprietary applications are used for these tasks, but it’s clearly a step ahead.
Please elaborate. I mean no offense, but I’m getting pretty tired of people claiming X suffers from design flaws without ever explaining them.
OK, let me see: x-windows uses IP messaging, which is a whole lot slower than normal kernel system messaging; that means x-windows has high latency.
An example: imagine you’re running an X app and you click your mouse over, say, a button. That’s called an event, and events are usually handled through something called a message loop. So far it’s OK, but now x-windows has to use the TCP/IP protocol to send that event message to the X server (because you’re using a client), even when running on localhost (that’s your computer). That message can of course be broken into packets, which can arrive at the server in any order, and the server then reverses the whole process, bla bla bla (I’m getting tired of typing), which in the end means a very unresponsive GUI.
There’s more but i’m tired of typing so:
http://catalog.com/hopkins/unix-haters/x-windows/disaster.html
Sorry for the typos above & here.
OK, let me see: x-windows uses IP messaging, which is a whole lot slower than normal kernel system messaging; that means x-windows has high latency.
Ehm, no. The X protocol only uses IP-based messaging if it’s talking to a remote client/server (i.e. over a network). When your X client runs on the same machine as the X server, they use UNIX sockets for I/O, and those, at least under Linux, are bloody fast. The bottleneck is usually just the speed of your video card, rather than X’s supposedly slow messaging! For images, local clients can also use the X Shared Memory Extension, which means that both client and server can read/write the same memory, so there is practically no overhead when transferring large data sets (pixmaps, video, etc).
An example: imagine you’re running an X app and you click your mouse over, say, a button. That’s called an event, and events are usually handled through something called a message loop. So far it’s OK, but now x-windows has to use the TCP/IP protocol to send that event message to the X server (because you’re using a client), even when running on localhost (that’s your computer). That message can of course be broken into packets, which can arrive at the server in any order, and the server then reverses the whole process, bla bla bla (I’m getting tired of typing), which in the end means a very unresponsive GUI.
You should inform yourself better, since the above is just plain wrong. Local clients don’t go through the TCP/IP layer at all! This is different for remote clients, of course, since they need to talk over TCP/IP, as that’s the only reliable connection between two hosts.
There’s more but i’m tired of typing so:
Yeah, better find some other source for your information. This is what I mean about people blasting X when in reality they have no clue!
Oh that page you were referring to is veeery old, I think I remember reading that back in the early 90’s or something, things have changed quite a bit since then. Get with the program 🙂
-fooks
Ehm, no. The X protocol only uses IP-based messaging if it’s talking to a remote client/server (i.e. over a network). When your X client runs on the same machine as the X server, they use UNIX sockets for I/O, and those, at least under Linux, are bloody fast. The bottleneck is usually just the speed of your video card, rather than X’s supposedly slow messaging! For images, local clients can also use the X Shared Memory Extension, which means that both client and server can read/write the same memory, so there is practically no overhead when transferring large data sets (pixmaps, video, etc).
WRONG. The AF_UNIX socket family, like AF_INET, uses the TCP/IP protocol; the ones that bypass TCP/IP are raw sockets (AF_NS, I think, but I’m not sure). And sorry if I’m wrong (and I might be), but x-windows doesn’t use those, so x-windows is stuck with packets and high latency, I’m afraid. Of course, when I say “high latency” it’s because I’m comparing x-windows messaging latency with, for example, Microsoft Windows’ internal messaging latency, not MS Windows’ TCP/IP latency; Linux is quite able to crush Windows there.
The problem here is that x-windows was never meant to work as a real-time multimedia GUI, and it performs very badly as such, despite all the “Frankensteinish” updates to it. It’s like trying to make an elephant run like a cheetah when it just was not meant to do so.
The problem here is that x-windows was never meant to work as a real-time multimedia GUI, and it performs very badly as such, despite all the “Frankensteinish” updates to it. It’s like trying to make an elephant run like a cheetah when it just was not meant to do so.
Even if you consider X’s client/server messaging slow (I disagree with this), that has very little bearing on X’s multimedia quality. Video playback uses shared memory buffers; the only command the X server receives is to copy that buffer to the screen once per frame. DRI uses a similar approach, where a library on the client side accesses everything as directly as possible.
There have been attempts to make GUIs where everything is accessed directly on the client, or through shared memory. I haven’t seen one of these projects (DinX, DirectFB…) produce anything as flexible and secure as X.
WRONG. The AF_UNIX socket family, like AF_INET, uses the TCP/IP protocol; the ones that bypass TCP/IP are raw sockets (AF_NS, I think, but I’m not sure). And sorry if I’m wrong (and I might be), but x-windows doesn’t use those, so x-windows is stuck with packets and high latency, I’m afraid.
Yes indeed, you are wrong. For local connections (client/server on the same machine) X uses the most efficient transport mechanism available, and that is not TCP/IP. Heck, there are even implementations that use shared-memory IPC for the local transport! If you ever run XFree86 on Linux, check out the /tmp/.X11-unix directory; you will find a nice local X0 socket there. That is not a TCP/IP socket, BTW. Also do “man 7 unix” and read up on UNIX sockets; I suspect you are confused. And as a last thing, it is called the “X Window System” or “X11”, not “x-windows”.
so x-windows is stuck with packets and high latency i’m afraid, of course when i say “high latency”
Wrong again. And please, show us some numbers next time. You are simply drawing conclusions from wrong facts.
The problem here is that x-windows was never meant to work as a real-time multimedia GUI, and it performs very badly as such, despite all the “Frankensteinish” updates to it. It’s like trying to make an elephant run like a cheetah when it just was not meant to do so.
I can do multimedia just fine on X, thank you very much. The ultimate multimedia experience is probably those FPS games (Quake, UT, etc.), and they typically rule under Linux/XFree86. So please, stop spreading misinformation.
BTW I’m looking forward to running DOOM3 on XFree86 and kicking some Windows butt!
-fooks
fooks: Ehm no, the X protocol only uses IP based messaging if it’s talking to a remote client/server (i.e. over the Internet). When your X client is run on the same machine as the X server they use UNIX sockets for I/O, and sockets, at least under Linux, are bloody fast.
No, they’re not fast; not compared to pipes or SVR4 message queues, and especially not compared to shared memory, which is what X should use by default. Instead, use of shared memory in X is achieved through the MIT SHM extension. Unfortunately, this requires application-specific support, and virtually nothing makes use of this extension.
The real architectural problems with X stem from its network transparency. The protocol contains a whole slew of proprietary drawing commands. In my opinion, a significantly better approach is to have the applications draw directly into a shared memory buffer, then the display server is only responsible for compositing these buffers into what’s displayed on your screen.
This is the Quartz approach, where applications draw directly into, or write PDF commands to, a shared CGContext buffer. Such an approach is infinitely faster.
Micah: Even if you consider X’s client/server messaging slow (I disagree with this) that has very little bearing on X’s multimedia quality. Video playback uses shared memory buffers. The only command the X server receives is to copy that buffer to the screen once per frame.
Wrong, this uses a hardware overlay, where an application writes directly to the video hardware. This has only become usable in XFree86 in the past two years through the Xv extension. Before that, the only alternative was the horrible DGA extension, which the XFree86 developers rightly disable by default in XF4.
Aren’t you the main PicoGUI developer? What are they teaching you over there at CU anyway? I thought you’d know this… perhaps you haven’t attempted to implement overlay support into PicoGUI yet. At any rate I have my own issues with PicoGUI’s architecture, namely that it doesn’t solve the architectural problems with X that I’ve just stated… it comes off like X rehashed.
Hug0: AF_UNIX socket family like the AF_INET uses the tcp/ip protocol
Perhaps you’re confused by the type/protocol parameters you pass socket() when you create a UNIX domain socket. Indeed you’re probably passing SOCK_STREAM, which with AF_INET sockets makes use of TCP. But no, Unix domain sockets are essentially bidirectional pipes and for obvious reasons don’t require the use of TCP.
fooks: I can do multimedia just fine on X, thank you very much.
I mean, this just isn’t true. You lack an underlying media architecture to really do any sort of media work. Show me a decent linear video editing program which supports a wide variety of codecs that you can run in X.
No, they’re not fast, not compared to pipes, SVR4 message queues, and especially not compared to shared memory, which is what X should use by default.
By default local connections use the fastest transport available. That really depends on the X implementation and the underlying OS. Either way, the local transport is not the bottleneck most of the time.
Instead, use of shared memory in X is achieved through the MIT shm extension, Unfortunately, this requires application specific support, and virtually nothing makes use of this extension.
Popular X toolkits do use it (when told to), and by extension so do all the applications written with those toolkits (GTK+ 2.0, for example).
I mean, this just isn’t true. You lack an underlying media architecture to really do any sort of media work. Show me a decent linear video editing program which supports a wide variety of codecs that you can run in X.
Try MainActor ( http://www.mainactor.com ); it supports most of the popular codecs (sans QuickTime).
I personally use Cinelerra ( http://www.heroinewarrior.com ), but it’s a real bitch to compile. On the playback side I use mplayer ( http://www.mplayerhq.hu ) for anything video and AlsaPlayer ( http://alsaplayer.org ) for audio. I also have a copy of Houdini for Linux (a $3000 3D Animation Tool) and it really rocks, wish I had more time to learn it. So somehow I don’t really miss an “underlying media architecture”, whatever that might be. I do however look forward to an OpenML implementation for Linux ( http://www.khronos.org ).
-fooks
Here are some latency results I found; I still think that X11 falls under the AF_UNIX category:
http://www.nsa.gov/selinux/doc/freenix01/node16.html
Aren’t you the main PicoGUI developer? What are they teaching you over there at CU anyway? I thought you’d know this… perhaps you haven’t attempted to implement overlay support into PicoGUI yet. At any rate I have my own issues with PicoGUI’s architecture, namely that it doesn’t solve the architectural problems with X that I’ve just stated… it comes off like X rehashed.
I started PicoGUI a couple of years before I started attending CU; I’d been programming long before I saw anything about computers in school.
Obviously you don’t know much about PicoGUI’s architecture…
YES, it’s client/server. BUT, it does this almost completely differently from X. With widgets in the server, there’s no IPC mechanism at all between them and the video driver.
PicoGUI’s architecture is also flexible enough that in the future there will be support for other methods of connecting the client and server. Shared memory message queues (PicoGUI already supports SHM bitmaps) or even plain procedure calls by dlopen()’ing client apps into the server. With this flexibility, various systems can easily make speed/security tradeoffs.
I know the difference between Xvideo and MIT SHM. YUV overlays work pretty much the same in any GUI, so it’s not really worth going into. The example I had in mind were games that do all the rendering themselves then use SHM and XPutImage to blit to the screen. If the game is double-buffered anyway, this incurs almost zero overhead. Not like SHM in X is a new thing either…
Here are some latency results I found; I still think that X11 falls under the AF_UNIX category:
Those figures are microseconds! Also, benchmark figures are usually obsolete by the time they’re posted, but let’s run with them for now. This link actually proves that the local transport is NOT the bottleneck at all. My guess is that the latency you experience has nothing to do with X per se.
-fooks
No, they’re not fast, not compared to pipes, SVR4 message queues, and especially not compared to shared memory, which is what X should use by default.
>>>>>>>>
Prove it. Or shut up. Actually, I’ll make it easy for you: you’re wrong. http://gms.freeshell.org/essays/throughput.html
The tests show that on Linux 2.4, UNIX domain sockets are essentially as fast as pipes, and both are a good bit faster than SVR4 message queues.
Instead, use of shared memory in X is achieved through the MIT SHM extension. Unfortunately, this requires application-specific support, and virtually nothing makes use of this extension.
>>>>>>
Umm, KDE has support for MIT-SHM built into its image classes (like KPixmapIO).
The real architectural problems with X stem from its network transparency. The protocol contains a whole slew of proprietary drawing commands. In my opinion, a significantly better approach is to have the applications draw directly into a shared memory buffer, then the display server is only responsible for compositing these buffers into what’s displayed on your screen.
>>>>>>>>>>>
A) That takes up tons of memory (for the off-screen buffers) and worse, blows your caches.
B) That limits you to software rendering. I wrote a highly tweaked, very simple imaging library a few years ago; it performed less than half as well as using X calls.
This is the Quartz approach, where applications draw directly into, or write PDF commands to, a shared CGContext buffer. Such an approach is infinitely faster.
>>>>>>>>>>
Right. Which explains why Mac OS X has such great performance? Please. The future is hardware acceleration. Due to the inherent latency of accessing an off-CPU device, almost all hardware is designed to execute command buffers rather than be programmed directly. X (with the DRI) handles this model naturally. Quartz doesn’t.
Wrong, this uses a hardware overlay, where an application writes directly to the video hardware.
>>>>>>>
This has only become usable in XFree86 in the past two years through the Xv extension. Before that, the only alternative was the horrible DGA extension, which the XFree86 developers rightly disable by default in XF4.
>>>>>>>
Now that’s just moronic. There is nothing that bad about DGA except for the fact that direct graphics access is inherently dangerous, and these days, mostly useless. The real reason DGA is disabled is because not all drivers in XFree 4.x handle it properly.
I mean, this just isn’t true. You lack an underlying media architecture to really do any sort of media work. Show me a decent linear video editing program which supports a wide variety of codecs that you can run in X.
>>>>>>>>>
You mean like aRts and GStreamer? Exactly what are you missing?
Fine! I give up, you’re right, using a client/server model is great; it doesn’t matter that you’ll probably only ever need one client for desktop use.
No, having a server running for just one client isn’t a complete waste at all.
Prove it. Or shut up. Actually, I’ll make it easy for you: you’re wrong. The tests show that on Linux 2.4, UNIX domain sockets are essentially as fast as pipes, and both are a good bit faster than SVR4 message queues.
LOL, hi there Linux zealot. Nice attitude, are you taking an attack on X personally? It’s also nice you assumed I was talking about Linux. Unfortunately, I wasn’t. Benchmark SVR4 message queues on an OS I care about, like Solaris, Irix, or FreeBSD. I think you’ll find things are somewhat different.
A) That takes up tons of memory (for the off-screen buffers) and worse, blows your caches.
And the shm extension doesn’t? Ever look at how much memory your X server is using?
That limits you to software rendering.
Uhh, as opposed to what? Your alternatives in X are writing to a socket or to a shared memory buffer. If you’re using a Quartz renderer you are also writing to a shared memory buffer, except it’s one which can be immediately composited by the WindowServer and displayed.
I wrote a highly tweek, very simple imaging library a few years ago. It performed less than half as well as using X calls.
And of course you were using a Quartz renderer, right?
You mean like aRts and GStreamer? Exactly what are you missing?
gstreamer is the only thing close to what I want, and has heinous performance issues and is likewise buggy as hell. I’d rather have usable, developed software like Quicktime or DirectShow. Show me a decent non-linear video editor for your OS of choice that isn’t OS X or Windows, then let’s look at how infinitely unusable it is compared to Final Cut Pro or Premiere.
LOL, hi there Linux zealot. Nice attitude, are you taking an attack on X personally? It’s also nice you assumed I was talking about Linux. Unfortunately, I wasn’t. Benchmark SVR4 message queues on an OS I care about, like Solaris, Irix, or FreeBSD. I think you’ll find things are somewhat different.
Yup, and slowness on Solaris is the reason why people b&m about X’s slowness on Linux, right?
And the shm extension doesn’t? Ever look at how much memory your X server is using?
Remember that X mmaps the video card’s memory, so if your video card has 64MB or 128MB of RAM, it will show up somewhere, right?
And do you remember what kind of machines X was originally designed to target?
Uhh, as opposed to what? Your alternatives in X are writing to a socket or to a shared memory buffer. If you’re using a Quartz renderer you are also writing to a shared memory buffer, except it’s one which can be immediately composited by the WindowServer and displayed.
Video cards since the Windows 3.1 era have had “2D acceleration”, meaning you can draw graphics primitives in hardware. Believe it or not, X servers use this. So your program does not turn pixels on and off in the framebuffer when drawing a line (like Quartz does); it tells the card to draw a line, circle, whatever: solid, dashed, etc.
Arguing that X is slow and holding up Quartz as the opposing example is a bad idea. Try running X and Quartz on the same machine, do some benchmarks, and then try again…
gstreamer is the only thing close to what I want, and has heinous performance issues and is likewise buggy as hell. I’d rather have usable, developed software like Quicktime or DirectShow. Show me a decent non-linear video editor for your OS of choice that isn’t OS X or Windows, then let’s look at how infinitely unusable it is compared to Final Cut Pro or Premiere.
I agree, gstreamer is buggy as hell. But a multimedia architecture doesn’t have anything to do with the original claim that “X sucks”, made without reason. However, I’m willing to wait to get a multimedia architecture with a sane API, as opposed to DirectShow or QuickTime.
Also note that Windows-based NLEs usually use VfW or QuickTime, not DirectShow. DirectShow is a playback-oriented architecture, with editing services thrown in as an afterthought.