Wayland 1.0 was officially released on October 22. Wayland is a protocol for a compositor to talk to its clients, as well as a C library implementation of that protocol. The compositor can be a standalone display server running on Linux kernel modesetting and evdev input devices, an X application, or a Wayland client itself. The clients can be traditional applications, X servers (rootless or fullscreen) or other display servers.
Look out, WP7.5! Now that we’re finally putting X behind us and have a modern compositing stack, there’s a new kid on the block, and his name is “Year of Ubuntu with PPA repos on teh Desktop”.
This is gonna be big. I can feel it.
not sure if trolling or stupid.
Neither. It’s (trying to be) humorous.
Trying being the operative word.
Has nobody heard of Poe’s law?
My point is that Wayland is irrelevant. It has been cited so often as the solution to _____, but in a world where Android is the biggest consumer Linux distro, the problems that Wayland is meant to solve have already been circumnavigated. NB: not solved, but circumnavigated. Wayland is indeed solving problems, but I doubt the solutions’ meaningfulness at this point.
Moreover, because other parts of the Linux desktop ecosystem are broken–er, ahem, I mean “free and decentralized”–once those technical problems are solved by Wayland you still won’t be any closer to “Year of Linux on the Desktop” (which is an irrelevant dream at this point anyway, as I’ve noted), because what you’ll actually get is “One Year of Ubuntu PPAs and Unofficial Slackbuild Packages”.
Regarding trolling or stupid, hints for next time: “killar”, “WP7.5”, “teh”.
Most of the criticisms made by Miguel are valid.
The thing I find infuriating is that no Linux fan will take any criticism about their platform.
lucas_maximus,
“Most of the criticisms made by Miguel are valid.”
What you talkin about Willis?
“The thing I find infuriating is that no Linux fan will take any criticism about their platform.”
Exaggerated, isn’t it?
I’ve been pushing Linux solutions, but even so I’ve got plenty of gripes with it. And as often as not I have some problems with Linus’s direction for Linux. When I was a pure Windows user, I said the same thing about Bill Gates. The same would go for Steve Jobs. It’s just a fact of life: those in charge don’t always do the right thing.
I’ve never bought into the platform-as-a-religion philosophy, but clearly some have on all sides…maybe I am an outlier?
Edit: If you reword the sentence to the following, it’d be less exaggerated:
“The thing I find infuriating is that some Linux fans won’t take any criticism about their platform.”
The thing to note though is that it applies just as much to windows and apple fans too.
No, because the GPL/GNU is almost a religion now.
Read “You Are Not a Gadget” … interesting stuff from a guy who used to live with Stallman.
When breaking backwards compatibility and fragmentation are features of the platform … I can’t support that as a software engineer.
As for pushing Linux solutions … well, let’s say you shouldn’t be employed.
lucas_maximus,
“No, because the GPL/GNU is almost a religion now.”
Surely no more than unadulterated capitalism is. Unless one is too religious about it to see things for what they are, we should all accept that there are some truths in all philosophies. One should draw on all things in moderation.
OS fanatics exist on all sides, they’re fighting something of an OS turf war here on osnews – something which most of us find annoying as hell. Still, it’s no reason to overgeneralise.
“As for pushing Linux solutions … well, let’s say you shouldn’t be employed.”
You do know you are trolling, right? Of course you do, you are obviously trying to fuel the flame, but what’s the point?
If you push only Linux solutions then you shouldn’t have the job … if you can’t see why this statement is correct, you shouldn’t be near a computer.
lucas_maximus,
“If you push ONLY Linux solutions then you shouldn’t have the job … if you can’t see why this statement is correct, you shouldn’t be near a computer.” (emphasis mine)
Moving the goalposts subtly. I’m glad this statement is less extreme than before. Surely you’d say the same thing about pushing only Windows solutions, right?
Yep. I would push the best solution.
I assume you’re referring to this? http://tirania.org/blog/archive/2012/Aug-29.html
You do realize that my entire post was a sarcastic criticism of Linux on the desktop? I am also a Linux user.
Edit: added link
I do realise that it was a joke now, but Poe’s law is relevant. I have heard so many extremist idiots … I honestly can’t tell what is a joke and what isn’t any more without a smiley.
That’s quite a generalization, isn’t it? You could also apply it to fans of any platform: Mac, Windows, Android, etc. All of them have their share of people acting irrationally.
Downvoted? Seriously? For suggesting that the way Wayland will be integrated into distros will probably be haphazard and that even after integration is complete and smooth that it will be largely irrelevant?
It is OSNEWS … aka Neo-slashdot.
Actually having any sort of opinion on here will be against the GNU gods or something and will get a downvote.
Get used to it.
To be fair, X11 was “never” the problem. The good thing about Wayland is that it gives us a clean slate to work with and to experiment with. X11 will never go away, we will still be able to use it as a Wayland extension, so everything should work out OK in the end.
Except for the clunky architecture for doing high-performance graphics rendering.
Which was solved by the DRI2 extension..
As a matter of fact, Wayland is very much like the DRI2 extension, which is not surprising given that it’s the same author.
DRI2 was just a better hack, but still a hack.
You can enjoy the discussion here, from XDC 2012:
http://www.phoronix.com/scan.php?page=news_item&px=MTE5MTY
Sorry, are you saying Wayland’s a hack too, or just DRI2? I wasn’t sure.
I will read that Phoronix link now, but I’m not sure if it’ll all go over my head. Cheers.
I mean that DRI2 was an improvement, but there are still quite a few things Windows and Mac OS X (iOS) do way better because of the way the hardware is made available to the drivers.
DRI2 still has quite a few issues when trying to push the graphics hardware.
That is why a kind of DRI3 is being considered.
I am all for Wayland.
Now that I’ve seen they’re aiming for network transparency in some form I would really like to try it out when it starts hitting repos.
As long as I can ssh a nautilus window from home at work I’ll give wayland a go.
While X is better than it was when I started using Linux, it doesn’t get enough money/attention to keep up.
Hopefully wayland can keep it simple.
Heh. Wayland is most likely going to be as widely accepted as GNOME 3, given the fact that pretty much the same bunch of morons seem to be behind both efforts.
Well, GNOME 3 wasn’t a change that anyone really wanted. Wayland aims at fixing something everyone knows needs fixing. Add to that the fact that it’s being aimed at Fedora and eventually RHEL. Others can choose not to use it, but they will be giving up a lot of compatibility to do so. Apps will start to be ported over to Wayland, making anyone without Wayland support a dying breed. Unless of course it sucks. Then it just won’t go anywhere.
In the name of fairness, I should point out that I was a happy KDE 3.5 user, couldn’t stand the bugs and sluggishness in KDE 4.2 through 4.5, and switched to LXDE when the GNOME guys finally fixed the performance issues in the GTK+ Open/Save dialog (for folders with many files in them).
As a former Gentoo user, I don’t think in terms of “THE Linux Desktop Environment” and I think about GNOME 3 about as often as I do CDE.
When I talk about the Linux desktop regressing, I’m speaking purely about how client-side window decorations are to window management as cooperative multi-tasking is to concurrency. (Useful when done properly… but not even NASA can guarantee a bug-free program.)
I see you haven’t heard about Kaspersky.
Reminds me of the crap research I had to sit through where someone was trying to convince everyone that you can mathematically prove a program correct and make it bug-free.
What a gigantic load of crap.
-Hack
Err, well, that is called Formal Specification.
http://en.wikipedia.org/wiki/Formal_methods
In some ways it is similar to unit testing, except much more rigorous.
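To give a flavour of what that looks like in practice, here is a toy sketch using ACSL-style contract annotations (the kind of thing tools like Frama-C consume – illustrative only, since a real proof would also need loop invariants):

/*@ requires 0 <= n <= 65535;
    ensures \result == n * (n + 1) / 2;
*/
int sum_to(int n)
{
    int total = 0;
    for (int i = 1; i <= n; i++) /* a full proof also needs a loop invariant here */
        total += i;
    return total;
}

A verifier then tries to show the postcondition holds for every n in range – closer to exhaustive mathematics than to spot-checking with unit tests.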
hackus,
“Reminds me of the crap research I had to sit through where someone was trying to convince everyone that you can mathematically prove a program correct and make it bug-free.
What a gigantic load of crap.”
Not at all. If a program has a finite number of states, and each one of those states can be proven to do what is intended under all possible conditions, then you’ve got yourself a bug-free program.
There’s one huge caveat: a program has no choice but to assume its foundation (language and operating system) is bug-free and deterministic. Alas, sometimes operating systems do break promises, in which case even “bug free” programs can behave unexpectedly.
Here’s an example:
/* needs <stdlib.h> and <string.h> */
void *a = malloc(1000000);
if (a == NULL) exit(1); // exit predictably rather than crash
memset(a, 0, 1000000);  // segfault?
This code can crash under Linux because Linux will hand out virtual memory addresses without ensuring there are sufficient physical memory or swap pages to back them. To my knowledge, Linux is fairly unusual among Unix implementations in faulting during memory access rather than failing gracefully at malloc.
http://linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memor…
It’s possible to use unconventional methods to preallocate real memory from the heap, but then there’s the whole issue of OS memory overcommitment. When that happens, two instances of a process can use shared copy-on-write memory while the OS has only reserved enough RAM for one copy. In the event that one of the processes has to change a shared page, it is necessary to clone the page, but that cloning can fail.
char a[]="This is a very long non-const test string..."; // global variable might initially be loaded into shared memory pages.
void f() {
a[0]++; // segfault?
}
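On the “preallocate real memory” aside above, a minimal sketch of the usual trick – touch every page so the kernel has to commit backing store up front (alloc_committed is just an illustrative name; under aggressive overcommit settings this can still invoke the OOM killer rather than fail cleanly, and mlock() is the heavier-weight alternative):

#include <stdlib.h>
#include <unistd.h>

void *alloc_committed(size_t n)
{
    unsigned char *p = malloc(n);
    if (p == NULL)
        return NULL;
    long page = sysconf(_SC_PAGESIZE);
    for (size_t i = 0; i < n; i += (size_t)page)
        p[i] = 0; // write to each page so it is physically backed now
    return p;
}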
I guess I kind of proved your point for you
I wanted to emphasise that mathematically correct programming is possible, but it requires similar mathematical correctness from the OS/language in order to be effectively correct. At the very extreme, one might argue that our hardware itself is probabilistic in nature, but that still doesn’t dismiss the mathematical correctness of a *software* system in principle.
TC++PL, 5.2.2: “The type of a string literal is ‘array of the appropriate number of const characters’” and “A string literal can be assigned to a char*. This is allowed because in previous definitions of C and C++, the type of a string literal was char*. Allowing the assignment of a string literal to a char* ensures that millions of lines of C and C++ remain valid. It is, however, an error to try to modify a string literal through such a pointer.”
So: undefined behavior in C++, and a segfault if your compiler puts string data in read-only memory (true for Intel’s C++ compiler and GCC, but not MSVC).
EDIT: need more coffee – your code is OK, since it copies the string literal into the non-const ‘a’ char array rather than pointing directly at the literal.
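For anyone skimming, the distinction in question:

char a[] = "test"; // array initialized with a COPY of the literal: a[0]++ is fine
char *p  = "test"; // points at the literal itself: p[0]++ is undefined behaviour,
                   // and typically segfaults when literals live in read-only memory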
KDE 4.9 will look and feel a ton better. I gave it a try with the Alpha of Fedora 18, and I love it.
Agreed. With network transparency in the plans, I’m cautiously optimistic, but I’ll wait until I can see how good a job KWin does of ignoring clients’ attempts to use client-side window decorations.
I’ve seen far too much pain and suffering from Windows, where a busy app can block itself from being moved, resized, refocused, or minimized.
As is, it feels as if Linux might be the only platform REgressing rather than PROgressing when it comes to not trusting applications to be perfect.
Weston (like Windows now) will ping the application, and if it doesn’t answer it will take over the window, which will still allow you to manage it.
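For the curious, the client side of that ping mechanism in Wayland 1.0 looks roughly like this (a sketch of the wl_shell_surface listener from wayland-client.h; a client that fails to pong promptly is what the compositor treats as unresponsive):

#include <wayland-client.h>

static void handle_ping(void *data, struct wl_shell_surface *surface, uint32_t serial)
{
    wl_shell_surface_pong(surface, serial); // tell the compositor we are alive
}

static void handle_configure(void *data, struct wl_shell_surface *surface,
                             uint32_t edges, int32_t width, int32_t height) { }

static void handle_popup_done(void *data, struct wl_shell_surface *surface) { }

static const struct wl_shell_surface_listener shell_surface_listener = {
    handle_ping,
    handle_configure,
    handle_popup_done,
};
// registered via wl_shell_surface_add_listener(surface, &shell_surface_listener, NULL)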
It’s an either/or situation; both have advantages and drawbacks:
-client-side decoration: looks better, but can be a problem if the application is blocked, and resizing can be “jerky” if the application is slow.
-server-side decoration: resizing the window is smooth, but the content of the window can be ugly if the application is slow; less pretty for transformed windows.
I never really minded the desync between window borders and window contents. I considered it a necessary (and possibly even desirable) alternative to the risk of jerky resizing.
(For similar reasons to why I prefer tactile switches on a keyboard when feasible. User feedback must NEVER desync to a degree the user notices, even if that means ugliness. On a keyboard, desync means the user never builds the habit of trusting their muscle memory and must visually double-check their actions, resulting in slower interactions. With a mouse, desync means waiting to see if the system will “catch up” beyond what they intended, rather than just committing to the action as it looks at that instant.)
Hence why I’ll be giving KDE 4 (or at least KWin) another chance unless I see another featureful WM promise Weston support with the option to force server-side decorations.
Let’s just hope that someone will be able to patch whatever client-side decoration system wins out so that it can do the “use D-Bus to request WinDeco widgets from the WM” thing if it needs them.
(As with Canonical’s AppIndicators, I think the idea of using D-Bus to have the WM draw in-border widgets is a good one and a big step forward for UI consistency… I just refuse to use apps which only support libindicator because I insist that left-click toggle application visibility and right-click show a context menu on my tray icons.)
Ugh. You know you’re posting too late at night when you conflate a protocol with its reference implementation.
“…promise Wayland support…”
Agreed. Then again, IMHO what follows CSD is a BeOS-like design that dedicates one thread of each application to the GUI, so that you can be reasonably sure that resizing will be smooth.
It’s unlikely that this will happen on Linux, as it would be a big change, but who knows?
the compositor is implemented to forward input events to the focused application (but before doing so owns them and can act on them arbitrarily)…
thus we could bypass heuristics and let the application normally operate on input events, but have a key combination (CTRL+ALT+… or anything you want) make the compositor kick in anyway and handle the button press on its own, without forwarding it
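something like this hypothetical compositor-side hook, perhaps (every name below is made up for illustration – not weston code, just the shape of the idea):

/* hypothetical sketch – not a real compositor API */
static void on_pointer_button(struct seat *seat, uint32_t button, uint32_t mods)
{
    if ((mods & (MOD_CTRL | MOD_ALT)) == (MOD_CTRL | MOD_ALT)) {
        /* the compositor consumes the event itself: move/resize/close
           the focused surface even if the client's event loop is stuck */
        manage_focused_surface(seat, button);
        return;
    }
    forward_to_focused_client(seat, button, mods); /* normal path */
}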
protocols (ping/replies) to detect busy applications seem a bit of overdesign…
as does describing the titlebar and the button positions – since the compositor already has the last word wrt what is drawn on screen, and since an application’s decoration is, to it, just another surface to composite on screen,
it could decide whether to draw the application’s decoration (better: let the application draw it on its own), draw its own (which may amount to some window management buttons in a small box recognizably belonging to the compositor) on top of the application, or even replace the application-drawn one – what’s really needed is a way of flagging a surface belonging to some app as its decoration
one could argue that this leads to visual inconsistencies and ugliness – not if the compositor and application use the exact same decoration rendering, eg via a shared library
in that case, the compositor may draw its own decoration (or decoration part, eg the titlebar and buttons) without it being distinguishable from the normally app-drawn one..
it doesn’t have to be either/or, imho…
I’m not sure if I totally get what you describe for resizing… so I’ll try to rephrase it: currently the application provides one buffer for the complete window (decoration plus content); you’re suggesting that the application send two buffers, one for the inside and one for the decoration, so that if the server detects that the client isn’t answering fast enough, it can do the resizing itself?
I don’t think that it would work very well..
only one surface for the whole window?
correct me if i’m wrong, but i got it that surfaces are fully redrawn in response to events they are the target for
and every visual object gets assigned its own surface so as to avoid redrawing everything and all (title-/menu-/scroll-/tool-bars, widgets) whenever something happens somewhere else (especially in the actual work area, eg the canvas in a paint program)
which seemed quite inefficient..
that’s why i was thinking that the compositor could do anything including replace the application’s decoration on the fly – never mind…
but still, it can superimpose its own (or at least a box with its own control widgets) aligned to the app’s window, without requiring a titlebar description protocol…
rather than suggesting, i was more like expecting it to be so (actually not just two, but possibly one surface per widget or – for “frame” like decorations – frame edge)
moreover, since i thought the point of compositing was to leverage the gpu’s hw rendering capabilities, and surfaces are actually handles to objects in gpu memory (thus shareable at hw level), the application wouldn’t “send” anything (at least not to the compositor, but to the gpu, and only when the surface needs redraw)
and, even when the decoration is affected, only that would need to be “sent”
the point was that the server shouldn’t try to “detect” anything
not by pinging the application and using round trips, anyway – one could reply to a ping from the compositor in a separate thread, and the application would appear responsive even though the gui loop is busy…
giving the user the possibility to toggle to an alternate mode in which control (minimize, close etc) buttons are provided by the compositor, at any time, would be much more useful than any heuristic…
I’m not optimistic about network transparency in Wayland unless it coincides with a major push to bring network transparency to D-Bus. In fact, one could argue that Wayland should be built on top of a network-transparent message bus.
The client-server paradigm is the original sin of UNIX and everything/everybody that was influenced by it. Clients often want to talk to one another. Servers often want to talk to one another. Servers often want to make requests on clients. In modern software systems, the distinction between clients and servers often breaks down or creates artificial limitations.
This is why D-Bus (or COM in the Windows world) has become such an important architectural element of modern operating systems. Of course, it was implemented in userspace, and it wasn’t until years later that anyone even thought to suggest that maybe the Linux kernel should have a native pub-sub socket type — so ingrained is the client-server mindset.
We live in a peer-to-peer world, with peer-to-peer programming models and a fantastic peer-to-peer network architecture, and the extent to which we shoehorn this all into the client-server paradigm is shameful.
Just look at how web frameworks have evolved as crude and awkward bastardizations of the MVC pattern, which doesn’t really translate from object-oriented programming to a client-server protocol like HTTP.
In both X11 and Wayland, clients aren’t really pure clients, because they respond to requests from the server to handle input and exposure events. So what we’re really talking about are applications of the generalized peer-to-peer message bus.
We need a first-class, network-transparent, pub-sub, multicast socket implementation in the kernel. D-Bus would be a thin abstraction of that, and since the kernel implementation would replace two expensive copy operations with a cheap page table mapping, Wayland could ride on the D-Bus without performance issues, gaining network transparency as a matter of course.
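As a pure thought experiment, the userspace view of such a socket might look like this (every constant and option name below is hypothetical – nothing like this exists in the mainline kernel):

#include <string.h>
#include <sys/socket.h>

#define AF_PUBSUB        42 /* hypothetical address family */
#define PUBSUB_SUBSCRIBE  1 /* hypothetical socket option  */

int subscribe(const char *topic)
{
    /* a kernel pub-sub socket would fan each message out to all
       subscribers, ideally by remapping pages instead of copying */
    int fd = socket(AF_PUBSUB, SOCK_SEQPACKET, 0);
    if (fd < 0)
        return -1;
    setsockopt(fd, SOL_SOCKET, PUBSUB_SUBSCRIBE, topic, strlen(topic) + 1);
    return fd;
}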
So, Plan9 ideas to the rescue?
You do realize we’ve had that *FOR SOME TIME* – well more than 20 years.
I myself have been using XDMCP – with heavy-duty centralized machines doing the hard work and local graphics handling the display – since 1995.
I mean, why have a big honking machine producing LOTS of heat sitting under your desk… when it can be sitting in a Climate Controlled Server Room… and displaying/sending all graphical events (along with sound) to my local desktop?
I mean, I’ve done it with OSF1, SunOS (pre-Solaris), HP-UX, AIX… of course Linux, *BSDs.
You realize that the centralized machine is actually a “client” of the local Desktop Graphical Server.
X was never/is not the problem, just people’s perception of it.
I mean, heck, I’ve done server installs of Linux in the server room… then done the rest of the install at my desk once I booted it properly. This was in 1998, when that kind of thing was IMPOSSIBLE on Windows without third-party consoles or other add-ons.
Heck, Direct Remote Administration really wasn’t on Microsoft’s radar until late 1999.
Network transparency… with X, we’ve had it for *YEARS*, probably decades.
We know.
Wayland doesn’t do it, so to replace X it will need it.
I have never really had any issues with X; it has for the most part served me well, and developers like Keith Packard helped to push it forward immensely – credit where it’s due.
I think, however, it’s about time X was put out to pasture, if only to get rid of the negativity: no matter how much work the X.org developers did or how much they improved it, it was always going to have past negativity piled on it.
By replacing it with Wayland, hopefully most of the “haters” can find something better to do with their time.
What about those of us that irrationally hate Wayland, where will we be left?
Well, X is hardly going away; it’s just going to stop being the first level. For those who still want X, you can still install it and use it as a base – in fact that’ll probably be the recommended setup for the next year at least.
Find a hobby?
Personally may I recommend a nice holiday away from the hate in places like this:
http://www.cheapcruises.co.uk/images/News/1013-hawaii-dance.jpg
http://upload.wikimedia.org/wikipedia/commons/thumb/2/2d/Cleaning_s…
http://www.willgoto.com/images/Size3/French_Polynesia_Bora_Bora_pic…
http://2.bp.blogspot.com/_9SoExW8GDCk/TAIVvA6pzSI/AAAAAAAAAGY/jp1FI…
http://www.weddings-abroad.com/images/borabora_lemeridien_ariel_01….
http://www.weddings-abroad.com/images/borabora_nuiresort.jpg
I really need a holiday ahh this planet is just so beautiful.
As users, we don’t care how “beautiful” an architecture is or isn’t, as long as it works. But if it doesn’t work, we users will hate it. X.org breaks compatibility with GPU drivers and binary compatibility with existing apps with every upgrade, so we will whine.
You, maybe not. I have, and I still remember how painful it was to properly set up the X Server. I worked on one huge project that was killed by this precise issue.
X IS antiquated, no matter how hard KP tried to give new life to it. The good part with Wayland is that the good things in X are kept, and the bad things will cost the right amount (so that people stop using it).
Maybe it is a good time to push X completely into user space (over different native backends: Win32, Haiku, Wayland, Quartz – and why not Plan 9) and use an abstraction layer like SDL as a fallback.
Just my opinion.
I was a systems integration agent with Trolltech for 4 years. Every embedded device we got access to, with the exception of the Nvidia Tegra 2, had a completely shit accelerated GLES2 stack under X11. We had some vendors, who will go unnamed, who promised us competitive X11 drivers a couple of months in advance, only to fail to deliver them for years. They were delivering drivers; they just performed revoltingly in comparison to their framebuffer fullscreen EGL compatriots.
In Qt, we had one multi-surface multiprocess driver for PowerVR hardware. The code was a nightmare to read, required tweaking on every PowerVR-licensed device, and was the closest thing to a production-ready driver we had. People would be tied to Qt, QWS and PowerVR. This was a major blocking point and, on the consultancy projects we were involved in, a consistent point of failure.
Contrast this to Wayland, which one of our engineers integrated with the Qt Wayland stack in one evening. Any additional work to the common Qt Wayland stack benefits every device consuming it.
Nope, if you are trying to ship a device and don’t need the Android marketplace, you can now ship something more professional on lower end hardware. This is relevant for IVI, set top boxes and other dedicated hardware platforms.
IVI? And… care to share what curious hardware runs Qt embedded under the hood?
Everything I have been hearing about Wayland is that it is going to rock our butts off with lightning-fast rendering and every desktop being pixel-perfect. If this is really true, then desktop Linux just might have the potential to rise again (X is definitely a load of crap, especially when you have to kill it).
Come on, desktop devs. Start working your butts off right now!
Then either you failed to listen or whoever you heard it from was misinformed: Wayland ~= X.Org’s DRI2 extension, so if you use a toolkit which uses the DRI2 extension, the performance should stay the same (more or less: Wayland integrates the compositor and display in the same process, so there is less IPC in Wayland).
With Weston, resizing a window should be “pixel perfect” (but resizing can be jerky); with KDE’s Wayland server it is the opposite: resizing will be smooth, but the content of the window can be “ugly”.
As for the irrational criticisms of X: many times the blame has been put on X even though the issue was with the drivers, and Wayland won’t fix driver issues.
Bull crap. X11 is a protocol. It’s a colossal fossilized turd from the late Triassic period of computing. You can dress it up all you want with fancy rendering extensions and the world’s best drivers, but the latency will still suck. A monolithic nightmare designed in a time before shared memory and dynamic linking has no business on a modern system.
Thanks for your so-well-researched arguments; I think we couldn’t have lived without them.
What was I saying about ‘irrational criticisms of X’?
Yet another example..
Wow.
X11 is indeed a protocol.
You’ve mixed in a TON of criticisms… ones that stem from implementations of CDE and old ways of handling libraries and memory management that have pretty much gone away in any modern build of X (or X.org) that uses any of the recent refactored source code. Compilers nowadays are a lot smarter than you obviously know.
Sorry, you don’t have a clue.
So, how long have you been actually *USING* X?