What else would we talk about other than the massively [popular|controversial] article about X.org last week? We try to address a number of concerns about the article and some common lines of reasoning and misunderstanding. Lastly, we move on to something completely different, with topics on Google Chrome on Linux, IE6 and the two details we know about RockMelt: Rock. Melt.
Here’s how the audio file breaks down:
| Time | Segment |
|---|---|
| 0:00:30 | Intro & reader comments |
| 0:13:10 | “X” |
| 0:48:15 | “Chrome, IE6, RockMelt” |
| 1:13:38 | Meta |
| 1:15:42 | (Total Time) |
Download .mp3 | Subscribe in iTunes | Subscribe via RSS
The intro / intermission and outro music is a Commodore 64 remix, “Turrican 2 – The Final Fight” by Daree Rock.
We genuinely hope that you enjoy the show, and that we’ve managed to bring up some original points in our discussion. Do follow up on anything that caught your attention in the comments!
We are always open to your feedback. Please either leave your comments on the site, or send us an email to [email protected].
Wow, you guys are really full of yourselves.
Feel free to not download the episode and not listen to it, then.
Since the original article was complete bull… that is probably the sanest option.
Thom was more or less bullshitting; the problem is his unstable hardware, not X. Anyone even slightly technical will conclude the same on any OS when a user has complaints like that. I would even go as far as to bet that he uses the proprietary NVidia driver and then blames free software.
About restoring the state on a crash? THERE SHOULDN’T BE A CRASH AT ALL… EVER… and in that case… who cares about restore (I don’t… in 5 years, not even one X/kernel crash for me).
About Thom…
He is more than occasionally quite interesting to read… if you exclude the times when he starts thinking he is a technical person (it is time for him to finally decide which pill he will swallow, if only he would realize the truth afterwards… HE IS NOT A TECHNICAL PERSON).
Strongly agree. And exactly what I’m doing. Pointless editorial begets pointless podcast. Pointless film at 11.
So, the original article was bullshit? So, the answers to the following two questions are “no” in your world?
Does X, or does it not, take down all the applications and the data within them when it crashes?
Does X, or does it not, crash when a video driver misbehaves?
If you answer these questions with “yes”, then you agree with the original article. If not, then you are using a special version of X that is better than what the rest of the world is using.
Chant, chant, chant, Thom. You are still chanting that mantra. You shouldn’t have included, even featured, so many unrelated troll points and unrestrained overstatements if you had intended your all-over-the-board troll-piece^Weditorial to make a single point well. This is what I meant in the previous thread when I said that it was poorly written. You could have written it differently, in a more focused, intelligent, and less (intentionally) provocative way, and a much different family of threads would have resulted. But you didn’t. As it stands… you’re reaping what you sowed, Thom.
And also as it stands, the statement quoted above is quite false. Think about it.
So, the original article was bullshit? So, the answers to the following two questions are “no” in your world?
Yes, you hit the nail on the head, just the same as a reporter harping on the fact that a Lamborghini is bad for mud races.
Does X, or does it not, take down all the applications and the data within them when it crashes?
Read the large print… THE DRIVER SHOULD NOT CRASH… EVER… And if you use the suggested hardware… it doesn’t… that is what all of us are telling you. BTW, you didn’t answer my bet. Do you use the closed NVidia driver?
And the answer is… no, it can’t. If you were even 1% as technical as you are pompous, you’d know that for yourself.
It can’t crash a networked GUI application. It can’t crash a GUI app whose real work is done by an underlying service over a socket. It can’t lose data if the software stores its history, and if the session is saved… it will be restored.
A lot of X11R6 is network based; the trouble is that not much software uses that option. But that is not X’s problem.
Does X, or does it not, crash when a video driver misbehaves?
As most of us said: X doesn’t crash unless you use bad hardware. It hasn’t crashed for me in 5 years, and being a developer myself I suspect I trigger a lot more buggy events than you do.
If you answer these questions with “yes”, then you agree with the original article. If not, then you are using a special version of X that is better than what the rest of the world is using.
lol, spoken with the words of an angry three-year-old child who thinks he knows what a computer is.
Please, tell us about your hardware. You avoid this like the plague… I wonder why…
update…
BTW, if you want to sound clever, don’t use exclusive, conditional questions.
If I translate your claims, we get this:
1) In case of global nuclear war, humanity would go extinct, as there is no possibility of surviving that much radiation.
2) As a further effect, Earth would be too polluted for humanity to live on.
2 is stupid: humanity is already extinct… they don’t need Earth anymore. And if any human did survive… 2 is wrong by default.
And since the answer to your first question is no… the second is invalidated by that fact.
Yes, and people shouldn’t steal, yet we still put locks on our doors.
Thom, just a question: do you plan to go the way of the Dutch site http://www.geenstijl.nl, which only pretends to be journalistic?
Yes, and people shouldn’t steal, yet we still put locks on our doors.
It would be a shame, a loss of potential that is there.
lol, exactly as I was expecting… can’t handle the technical stuff, so let’s go stupid.
I can’t believe you’d try to pass yourself off as an experienced software developer while making the claim that crashes should never happen (and, therefore, that it’s not your responsibility to defend against them). I also can’t believe you’d argue that users should be held to an extremely narrow set of software/hardware configurations, in the name of not exposing fatal software/driver bugs. That is insane. Any real, remotely competent, remotely experienced developer knows that errors happen, and that any software they produce (that they expect people to want and use) must be able to cope with them.
I can’t believe you’d try to pass yourself off as an experienced software developer while making the claim that crashes should never happen (and, therefore, it’s not your responsibility to defend against them).
And I can hardly believe you’d consider yourself smart enough to post an answer in public.
You would benefit from learning to read to the end of a comment. But then again, twisting one sentence is so much easier than addressing the whole thing.
I also can’t believe you’d argue that users should be held to an extremely narrow set of software/hardware configurations,
Then… Apple doesn’t exist? Its only reason for stability is its HCL. They define it, and they also provide machines according to it. The same goes for FOSS: there is better-supported hardware and less-well-supported or unsupported hardware. By using hardware where the developers:
a) didn’t write the driver (NVidia)
b) list known driver troubles in the HCL
you can’t find bugs, you can only be annoying. They already know it isn’t working as it should. And as for the NVidia s_hit… well, they can’t do anything about it.
By using hardware like that you could… well… hit yourself on the head with a hammer and still not believe it hurts.
in the name of not exposing fatal software/driver bugs. That is insane. Any real, remotely competent, remotely experienced developer knows that errors happen, and that any software they produce (that they expect people to want and use) must be able to cope with them.
lol, if you’d only learn to read to the end.
In my experience nVidia is the only card that has never given me problems beyond trying to find the driver. My Intel card with its open source driver has problems, and I hear ATI is worse.
Arguing that handling crashes is unneeded as there shouldn’t be any is like saying we do not need a fire service because houses should not burn down.
There are problems with the current graphical stack, even when using only open source software. The important question is how do we improve it.
I wasn’t arguing that handling crashes is not needed. I was arguing that X is the wrong place for that. X already provides a very good framework for that; it’s a shame almost no software uses it.
ATI drivers are still in development, while Intel… those drivers are going through a lot of changes just to provide exactly what you are asking for.
If only people would read plans…
My thoughts exactly. This was painful to listen to. Listen to the Java Posse for a podcast that focuses on more useful issues.
In my opinion you forgot a crucial little aspect before you started flaming.
Linux and also X are free and open source software, and hence without cost. So who are you flaming at? People who invest a lot of their free time to give the community the best of what they can do. It takes a sense of community to understand that.
Stick to Apple, Thom; it suits you better.
Jan
What you say is true. But there is no denying that X is seriously starved of developers. In general, open source infrastructure is seriously starved for developers.
The vast majority of developers of low-level things like the kernel and the toolkits are supported by corporate entities, if not hired directly to work on these things.
I do, however, think that X is an anomaly, as it seems totally underfunded in terms of money and in terms of corporate and developer support for such a vital piece of infrastructure.
The problem with that statement is that the open-source community is always very adamant that it produces better code and better products than the closed-source model. However, whenever there’s an issue, it’s suddenly all “they’re doing it for free” and “you have no right to demand quality”. I’m sorry, but that’s just not fair.
The open source community produces some damn fine software, but it’s not really fair to shout the good products from the rooftops, and then hide behind a lack of money when something doesn’t work as well.
Ah, that’s hilarious on so many levels. Write a negative story about Apple, and I’m anti-Apple. Write a negative story about X, and I’m pro-Apple.
Proves once again how people perceive the world: through the glasses of the products they prefer.
What I think is more ironic is how Linux advocates will try to put their OS of choice up against commercial operating systems like Windows and Mac OS X, but when you hold their operating system to the same standard as those commercial operating systems, they seem to turn around and plead that it is unfair.
It is as though they (the advocates) want to have their cake and eat it too – they want to compete but not be held to the same standards.
I would call myself a Linux advocate. I definitely think that Linux has some serious strong points, and I would say that Linux is far-and-away the best choice for some users and some jobs. But I’d never try to claim that it doesn’t have its down-sides and weaknesses too, or that it’s a perfect solution to every problem. And the Linux advocates you describe are not all, or even most, Linux advocates: many serious Linux users will admit that both the X server and the sound system are areas where there is still serious progress to be made.
I use Linux extensively both at work and on my personal computers — including my laptop, where I even use Linux for gaming (SoulStorm in WINE). But I’ve definitely had my share of serious, install-wrecking problems, the vast majority of which are either X or sound related. And I’ll definitely tell you, Linux is not ready for the desktop now, and will not be, until there are serious improvements to the way we handle desktop management, sound management, and driver installation (it’s actually quite annoying to have to go re-compile all your third-party drivers every time the kernel is patched).
Clearly, you know how to write an article, but reading still seems a little problematic.
I have written here a few times before; I am not for or against any OS, nor am I a fanboy of one or the other.
What I would prefer is VMS, but that is beside the point.
If you use Linux you know where it comes from, the same with Apple or Windows. You can try to influence the Linux-people like you could try to influence MS or Apple.
Flaming because it didn’t work for you in your situation is not a sign of maturity, but more a sign that you probably did a few things that broke your system.
That situation is most likely on Windows, then Linux and then Apple (from my experience supporting these platforms). If you expect Linux to be Windows or Apple, well, too bad, it is not. Each has its own weaknesses and its own strengths, like every OS.
The only thing different with Linux is that it is a community project. Flaming at it as you would at Apple or MS seems a bit weird; using this site to drum up support for the developers of X would be a more elegant way.
Watch your generalizations and stereotypes, Thom. They are almost guaranteed to be wrong. Try to stick to speaking of individuals, or groups which are homogeneous enough that your generalizations don’t fall apart when scrutinized, even cursorily.
“Watch your generalizations and stereotypes, Thom. They are almost guaranteed to be wrong.”
Actually, stereotypes and generalizations often prove to have varying measures of truth in them that apply to a visible portion of the targeted group at a level disproportionate to that of the general population. How much truth a particular generalization or stereotype contains usually depends on the observation skills and insight level of the speaker, not the fact that the statement is a generalization or stereotype.
That last sentence could be more accurately rendered thus:
“They are almost guaranteed to have exceptions.”
So long as one is willing to expect and allow for these exceptions, there is absolutely no danger in characterizing a group with a general statement. This is especially true when the members of said group are members because they self-identify with that group.
Generalizations can be useful in gaining an understanding of group-behavior. Without generalizations, you may as well take the torch to the entire field of study called Sociology, since in many ways it rests upon the forming of generalizations about people. In fact, many of the social sciences “suffer” from this “limitation.”
Please don’t fear generalizations and stereotypes simply because some people use them as a crutch with which to support their irrational or hateful beliefs. This would be throwing out the baby with the bath water.
—
To speak particularly to this issue, I agree with Thom that the sentiments he describes are exhibited visibly by a good number of people who would consider themselves “Open Source” or “Free Software” advocates. By visibly, I mean that this occurs often enough to warrant a useful generalization. In my many years of lurking on computer-related message boards, I’ve lost respect for many a poster as it became clear that they liked to have it both ways on this point.
Perhaps it would help to put it in this way….
If I was served a free meal in a humble spirit with no claims made with respect towards the quality of the food, I would have nothing to say but “Thank you.” This would be the case even if the meal was terrible. However, if the person serving me the meal also insisted on extolling the virtues of his cooking methods and the quality of his food as compared to that of others, I would not consider it rude to counter these statements based on my experience. And if he then complained that my opinions weren’t fair because the meal was free, not only would his complaint fall on deaf ears, but he would lose my respect.
Stunning that after so many years people do not seem to grasp the point.
FOSS is about “free as in speech, not as in beer”.
Here is another one having trouble with his reading skills.
That wasn’t his point. His point, at the end, was that you/we/the Linux-using community cannot claim that our software-development model works better than other models, while also claiming that no one may point out that (some of) the results of our efforts are inferior because we’re just volunteers. You can’t have it both ways.
I value freedom a lot; I don’t like OS/software vendors telling me what I can and can’t do with my computer. That’s why I’m a Linux user. But that doesn’t mean that I’m going to pretend that Linux (-based operating systems) don’t have some serious, serious weaknesses.
If you insist on being snotty it helps your credibility not to let the point whiz over your head on its way to the ears of the people laughing at your naive misreading of a simple example.
And yes, I am quite aware of the distinction between gratis and libre. The example stands intact in either case.
That is good to hear, now try to grasp the meaning of the word “community”.
Maybe, just maybe, you should try to join one, but I guess it is too much to ask. You need social behaviour for that.
I happen to be a member of a few groups of people who tend to be stereotyped. And it has been my observation that the minority of people in the group who serve as the basis for the stereotype are either much more vocal or much more obvious to casual observers, than the rest of us. For example, people I interact with in my daily life likely don’t happen to know that I am gay, but they most certainly notice the 7 foot drag queens in stiletto heels, big hair, and value size cans of Aquanet in hand, when our local Pride parade is on the evening news.
The silent majority ends up living with the stereotypes generated by the more salient members of the group. It is my firm belief, based upon what I have observed, that the Linux community has a silent, or at least relatively quiet, majority to which Thom’s generalizations likely do not apply.
The silent majority ends up living with the stereotypes generated by the more salient members of the group.
This is not the case if they distinguish themselves through their own behavior. They will be able to earn the respect of anyone that is not thinking blindly. They will be considered one of the exceptions, while not making the generalization any less true or useful.
It is my firm belief that there is nothing wrong with this. Generalizations can be useful in commentary and this is not any less so just because some people use generalizations to support their own hatreds. It is important to remember that the hatreds would still exist without the generalizations – they would simply find another avenue of expression.
How can you call the behavior of 10% “the rule” and the behavior of 90% “the exception”? It makes no sense. Are you sure you are not buying into some stereotypes a bit too much yourself? Because your quoted statement is a perfect example of *the problem* with stereotyping.
How can you call the behavior of 10% “the rule” and the behavior of 90% “the exception”?
I consider it important to properly define the group being generalized. When this is done, I don’t agree that the 10/90 split you describe will be in force.
In the case being discussed, the group that I would define as described by Thom is “Open Source Advocates on the Internet” rather than “Open Source Contributors.” And I believe that the generalization stands. The fact that some do not fit the bill makes this no less so. Nor does the fact that those that fit into the second category may be getting a bad name from the most visible segment of those that fit into the first.
I must reiterate that the willingness to step back from an initial generalization when the case of a specific person is at issue is what makes this all OK. But it is foolish to suggest that I cannot form expectations based on generalizations and then whittle down to the facts of the individual case. In my experience, the initial generalizations usually hold water after the final examination about 75% of the time.
But again, the grouping is important. To use your example, I would consider homosexuals that are discreet to be in a different group entirely from flamboyant homosexuals, and I do not hold the former responsible for the actions of the latter, nor do I lump them in together in my mind. Yet I can still make generalizations about both groups that tend to hold true. FYI I view heterosexuals through the same filter, and can’t abide people that insist on reporting to me on their sexual activities and proclivities, a subject in which I have no interest. That does not mean that I lump all heterosexuals into one group defined by the loudest members.
If you really stop to think about the subject, I think that you will find that human beings would be more or less immobile if we weren’t able to make generalizations and even indulge in assumptions. These are what allow us to move through life without getting bogged down with every single detail of every single problem or situation. For me, the metric of a reasonable man is not the absence of these cognitive filters, but rather the ability and willingness to apply attention to detail where and when it is warranted.
Let’s look at OSNews, where people are more enthusiastic about OS advocacy than in most places on the Internet.
From http://www.osnews.com/statistics , it is easy enough to derive that OSNews has 29988 registered users. Let’s say that just 10% of them are Linux advocates. I’d like to see your list of more than 1500 OSNews posters who make these claims, because that is what you would need to support the assertion that those who make these claims are in the majority. So please post your list. I’m very interested to see who the 1500+ advocates making these claims actually are.
BTW, if you would like to restrict the scope to only those OSNews registered users who actually post, your target number is 1220. If you only want to support your assertion that my 10% guesstimate is too low, you’ll need at least 244.
Now do you see what I mean about the silent majority? And do you see how far your own stereotype has led you astray?
I don’t have the time to do such a detailed analysis. So I will do the honorable thing and withdraw my support of the claim on the basis of lack of evidence.
That cuts both ways, however. I consider the claim untested unless you provide the same measure of evidence to refute the claim.
I will replace it, however, with the assertion that the vocal minority, majority or whatever you wish to call it, is creating this impression in the minds of at least some readers. As someone that wishes all of the success in the world to both Open Source software and Free Software, I detest the deplorable impression that these sophists are making.
And also, you know that no matter how much time you spent, you’d never come close to the numbers needed to justify the stereotype.
You have been claiming that most Linux advocates are making certain assertions. My position has been that your claim is unsupported by the evidence. The burden of proof has always been upon you.
Which is exactly the point I’ve been trying to get across. And I think I could count the major offenders here on OSNews on the fingers of one hand. Maybe two hands if I reviewed the history. But these people post so very often and so very loudly that one could easily *think* they were really speaking for the community. (And I don’t fault anyone for doing so.)
I scold the offenders often. But they never ever listen. Because they think they are doing so much good for the OSS cause. Or I should probably say the “Free Software” cause, since these folks tend also to advocate creating a schism in the community, placing themselves on a “side” labeled “Free Software”.
IMO, Linux has done well to get where it has despite their “help”.
Edit:
Now that that is settled, I should probably weigh in on the original question of whether OSS results in better code. In general, I think that the openness, and public peer review is beneficial. But when it comes to code quality and design quality, there is no such thing as a magic bullet. Closed code can be good, and open code can be bad. Large, well known projects are probably in the best position to benefit.
As far as I know KDE 4 is fairly independent of X.
Thanks to Qt it can run on Windows and on OS X. In Qt 4.3 they introduced pluggable graphics backends. The raster backend (many people complain that this breaks network transparency) does some things much faster than the X11 backend. Notably, Qt 4.3 runs much faster on Windows and OS X than on X11.
I don’t know about GTK, but if any replacement for X comes along, Qt should be fairly ready for it – more so if the OpenGL backend becomes viable.
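For anyone who wants to experiment, the backend can be picked at runtime in later Qt 4 releases – 4.5 and up, if I remember right; in 4.3/4.4 it was more of a build-time affair. A minimal sketch, assuming such a Qt build:

```cpp
// Minimal sketch: force Qt 4's raster paint engine instead of the native
// X11/XRender one. Assumes Qt >= 4.5, where the graphics system became
// selectable at runtime.
#include <QApplication>
#include <QLabel>

int main(int argc, char **argv)
{
    // Must be called before the QApplication object is constructed.
    // Other accepted values are "native" and "opengl".
    QApplication::setGraphicsSystem("raster");

    QApplication app(argc, argv);
    QLabel label("Rendered with the raster graphics system");
    label.show();
    return app.exec();   // same app, different paint backend
}
```

If I remember correctly the same thing can be done without touching the code, via the -graphicssystem raster command-line switch.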
Qt’s raster back-end runs faster on X as well in most cases. The major drawback is increased memory consumption, as each application has to maintain its own glyph cache for drawing fonts, rather than relying on the X server to do it for them.
Oh, and it completely kills network performance, of course.
Obviously the ideal would be to use OpenGL for everything. Qt’s OpenGL back-end beats the raster engine (often by a wide margin), but comes with its own share of problems.
It doesn’t have to be that way… you could put the glyph cache in a central process and share the memory containing the rendered glyphs with other processes. In Windows the glyph cache is one of those things that’s kept in the kernel session space.
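Very roughly, something like this – the segment name and the flat layout are invented for the example, and a real cache would need an index, locking and error handling:

```cpp
// Sketch of a shared glyph cache: one central process renders glyphs into a
// POSIX shared-memory segment, and other processes map it read-only instead
// of rasterising every glyph again in their own address space.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

static const size_t kCacheBytes = 4 * 1024 * 1024;   // 4 MiB of glyph bitmaps

// "Server" side: create the segment and fill it with rendered glyphs.
void publish_cache(const unsigned char *glyphs, size_t len)
{
    int fd = shm_open("/glyph-cache", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, kCacheBytes);
    void *mem = mmap(NULL, kCacheBytes, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    std::memcpy(mem, glyphs, len);        // real code would use a proper index
    munmap(mem, kCacheBytes);
    close(fd);
}

// "Client" side: map the same segment read-only.
const unsigned char *map_cache()
{
    int fd = shm_open("/glyph-cache", O_RDONLY, 0);
    void *mem = mmap(NULL, kCacheBytes, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                            // the mapping stays valid after close
    return static_cast<const unsigned char *>(mem);
}
```

(On Linux this needs linking with -lrt on older glibc.)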
That thought occurred to me half-way through the podcast too. Most applications nowadays don’t use X directly, but sit on top of one of Qt, GTK or maybe wxWidgets. So, if we (the Linux-using community) were to scrap X altogether for something radically new and different, then it’s possible that most of the changes would actually be isolated in GTK, Qt and wxWidgets, and that most application code wouldn’t need to be changed at all.
converting from using libX11 to libxcb is a logical first step.
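For anyone who hasn’t looked at it yet, the xcb flavour of “open a window” looks roughly like this – no error handling, and the event loop is only a stub:

```cpp
// Minimal xcb client, just to show the flavour of the libX11 -> libxcb move:
// same protocol on the wire, but a thinner, asynchronous API where requests
// are queued and flushed explicitly. Build with: g++ hello-xcb.cpp -lxcb
#include <xcb/xcb.h>
#include <cstdlib>

int main()
{
    xcb_connection_t *conn = xcb_connect(NULL, NULL);        // uses $DISPLAY
    xcb_screen_t *screen = xcb_setup_roots_iterator(xcb_get_setup(conn)).data;

    xcb_window_t win = xcb_generate_id(conn);
    xcb_create_window(conn, XCB_COPY_FROM_PARENT, win, screen->root,
                      0, 0, 320, 240, 1,
                      XCB_WINDOW_CLASS_INPUT_OUTPUT, screen->root_visual,
                      0, NULL);

    xcb_map_window(conn, win);
    xcb_flush(conn);                      // nothing hits the server until here

    // A real client would watch for expose and close events; this just spins.
    while (xcb_generic_event_t *ev = xcb_wait_for_event(conn))
        std::free(ev);

    xcb_disconnect(conn);
    return 0;
}
```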
I don’t understand why there’s so much whining about ‘X’. The protocol isn’t bad, it has a generally good security policy and works quite well for general programming.
I believe that the nearish future will hopefully bring a change where proprietary hardware-accelerated graphics start to go away, to be replaced with better-integrated, more generalized (hopefully open) processing units. X excels in this generalized environment.
Well … I kind of liked this episode. Especially the “X” part, which was better than the original article. Unfortunately the attitude and maturity of the discussion about X’s issues has been bad from the beginning.
What I’d like to see on OSNews is an interview with X.org developers, giving them a chance to speak and tell us the facts. Just hearing “X’s design sucks” / “Intel drivers are crap” etc. from a user’s mouth is not too constructive or interesting.
Amen. TBH I don’t really know that much about the insides of X.org and how it really works. Lots of people come along saying that X sucks and needs to be rewritten, but then they don’t really provide any decent proof of this; they tend to just whinge a bit about the drivers (ditching X would not solve driver problems, I think!) and not really back up their claims.
On the other hand though I’m not sure that I believe that all is well in the world of X.org, and that things mightn’t be improved with a revolutionary new system. X.org might well have a crufty and rubbish code base worthy only of deletion, but how would I know? I would really like to see an involved discussion take place about X.org making the cases on both sides in enough detail that I can actually establish a strong, informed position either way. Neither side in the X debate really seems to be able to make a clear argument, or not from where I’m standing, anyway.
I’m still annoyed that Thom wrote his original article out of anger, guaranteeing a degenerate flamewar instead of any chance of a decent discussion.
Since Christmas I have got rather annoyed at the way my Linux (Intel-based) graphics stack has been treated. The whole thing ground to a halt, with me waiting eagerly for the next update to something (kernel, xorg, mesa) to see if that helped. Even today some things don’t work.
I have some programming ability, but no idea how X works or how to improve it.
The question is: for an open source project, how can I help to solve these problems, aside from reporting what everyone already knows?
I listened to your podcast. Apparently you’re shocked that people think you’re a linux newb.
Well, I think you’re a Linux newb and a lousy expert, a lousy journalist and not much more than a big-mouthed pundit.
I don’t actually use Linux on the desktop much these days, but I know what you mean about X being pretty crusty. It shows its age when stacked underneath all the fanciness of GNOME and KDE.
But some of your comments were deluded fanciful rubbish.
There is no similarity between X and Windows 98. Maybe Windows XP, which would usually just crash completely on the odd occasion a driver misbehaved. Even Vista apparently crashed a lot because of nvidia drivers. But apparently those crashes don’t count, because X is supposedly like Windows 98 and crashes about 5 times a day.
I think you guys should stop being sensationalist whores and actually do some research into X. You made vague conjectures about how X contains a lot of cruft; why didn’t you research it and enlighten us? You complained a lot about PulseAudio, but all I heard was “it doesn’t work for me, so obviously it’s completely broken”. No insight into what’s really wrong with it, or really wrong with audio under Linux.
Instead it’s all grandiose statements and exaggerated bullshit.
My personal 2c: I think Linus is to blame for a lot of problems in the Linux world. Without a good stable ABI (or at least API) in the kernel, it’s too hard to write software that works around a bad driver, and it’s nigh impossible to write a good driver. Yet you praise him for kicking broken things to the kerb. He’s very definite about the kernel, and he’s not cooperating with anyone who wants to make Linux better by starting at the bottom with a nice stable base. He thinks you can just recompile the whole stack all the time, but he can’t even prevent serious regressions in just the kernel.
The strength of Apple, MS, BeOS etc is that they can get hardware support and the software that goes on top of it working nicely. Linux can’t.
So instead of saying Windows is awesome because it doesn’t crash anymore, perhaps you should actually do some journalism and find out from the Xorg devs why X is moving so slowly. Find out why they don’t just dump it all and write a new desktop from scratch.
I guess it’s just easier to deliberately start flamewars on the internet.
So tell me: why aren’t tech bloggers getting together and actually finding out stuff to put on their blogs? Why is the information such low quality – basically a FOX News pastiche of personal editorial and breathlessly shocked anecdotes presented as news?
Juxtaposition strikes again.
The comment about Windows 98 was simply that resizing a window causing a complete crash was akin to the sort of behavior expected in Windows 98—nothing *technical* was meant by that. You’re reading way too much into it.
No—but when Windows crashes, it just resets the graphics driver and carries on fine. Linux doesn’t. If X goes down you have to SSH into the machine to get it to come back up. Clearly, in your pent-up anger to come to the defense of Linux, you didn’t listen to the podcast too well.
Well, for a start, I wasn’t defending Linux, or X. I don’t believe either are particularly bright spots in the IT landscape. One is a better version of a 40-year-old OS, and the other is a hardly improved 20+-year-old desktop (from a certain point of view).
I wasn’t aware that when X crashed you needed to use SSH. It never needed that when I used it. I guess this is what you could call progress.
I think allowing for driver crashes is actually beside the point. We basically expect software not to crash. Obviously you expect the Windows kernel to never (or very rarely) crash. The Linux kernel achieved that years ago. (X hasn’t, of course, but then so little software ever does.)
And FYI, I’m just posting my opinion of the podcast, which is that you are wasting your time by making it and your readers’ time by posting it.
It didn’t contain a single thought that added to the original articles, and it contained a lot of hyperbole that makes even the original articles seem like pure flame bait.
If other people read this, they’ll know to skip the audio altogether.
Let me ask you a few very simple questions.
Does X, or does it not, take down all the applications and the data within them when it crashes?
Does X, or does it not, crash when a video driver misbehaves?
You and I both know that the answer to both questions is “yes”. Since video drivers are a common point of failure (including the open source ones from Intel), X needs to mitigate these crashes in such a way that they do not take down applications, causing data loss.
That’s all the article and this podcast explain. Nothing more, nothing less.
Yet it’s not some archaic 90s crap OS either.
Windows XP did the same, and for the most part I suspect Vista did it as well.
I’m not taking anything away from MS with getting it to work in Windows 7. It’s a real achievement to get it working. Prior to that, the kernel required the driver to exit gracefully, which still left you with a 16-colour 640×480 desktop that often required a reboot to fix.
I agree X is dated, but by your measure so is OS X, and so is Vista, which crashed a lot due to bad drivers.
It’s not beyond imagination that the reason XVideo died on you is because it was trying to work around a bug in the video driver.
I just listened to an hour of hyperbole and editorial that sounded like you defending the purpose of your article, which was to call X crap (which is fair enough).
But why is it crap? Why is it falling even further behind? Why isn’t it the cutting-edge software we want? Why hasn’t anyone done an Apple and moved on from it?
Apparently you don’t know, and apparently you didn’t ask anyone to find out.
… snip …
I think the discussions have demonstrated that your readership is asking for more – details of why this is the situation and what could be done about it. Your article is on a popular topic; we’ve all seen examples of X being poor and X being awesome, though sometimes one or the other experience dominates.
People are interested in the details of this stuff but your article only gave them a very high-level taste of the issues involved. Now they’re hungry because the original article didn’t satisfy that.
More X articles please, with more technical depth!
Let’s first reiterate that the article was VERY well received. It’s now one of the top-most recommended articles on OSNews:
http://www.osnews.com/statistics
But yes, of course more depth would be welcome, and I’m thinking about how to do that properly. However, that was not the point. Sometimes, it takes a blunt hatchet to get a discussion going, and not a scalpel. The blunt hatchet came down, and an insane amount of relatively good discussion came out of it. Now, it’s time for me to take the scalpel and take a closer look at the issue.
It’s still a fact that X will die if a graphics driver crashes, and that it will take everything with it. We can all agree that that’s a Very Bad Thing ™. I want to find out what it would take to combat this issue – I’d say the best way to find out how is to interview a long-time X developer.
I’m now trying to find out who.
Sounds like you’ve got a good plan, I look forward to seeing where this goes.
And the corollary would be to also try to get a more exact handle on how Windows handles this; maybe there are lessons in the specifics too. Someone (or maybe more than one?) posted on the original article saying that what Windows does is allow a device driver to be upgraded, or to restart itself on discovering an error condition (which is trivial if you can do online upgrades anyhow, assuming the driver authors take advantage of it).
I would expect implementing replacement-with-driver-co-operation to be significantly more straightforward than implementing full sandboxing / fault isolation of unco-operative drivers, so it’s worth knowing what they’ve done.
I’d recommend you talk to either Dave Airlie – airlied < AT > gmail.com or Adam Jackson – ajax < AT > redhat.com
Both are well known respected X hackers that have been in the game for many years and are also known to hold quite entertaining speeches.
Good luck!
Just FYI, to see how some of this support works in windows, look at http://www.microsoft.com/whdc/device/display/wddm_timeout.mspx. It’s unfortunately not super detailed.
How about Kristian Høgsberg? He is working on an alternative graphics stack (Wayland) for Linux AND at the same time works professionally on X11/X.org for Red Hat.
I wouldn’t be surprised if he would like to present what problems exist with X11 from his viewpoint, and the advantages of Wayland over the X server.
http://groups.google.com/group/wayland-display-server
I understand the point you’re making and that it is usually true. I just thought I’d point out that programs can be written to utilise a back-end which can easily run independently of a graphical front-end. Hence, important data can be saved in case of an X crash.
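A toy sketch of that split, just to make it concrete – the socket path and the line-based “protocol” are invented for the example. The document lives in a small backend process and the X front-end is only a thin client, so if X takes the GUI down, the data survives:

```cpp
// Backend process: owns the document, receives edits from a GUI front-end
// over a Unix socket, and persists them. It has no X connection, so an X
// crash only costs it the connected client, not the data.
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstring>
#include <fstream>
#include <string>

int main()
{
    const char *path = "/tmp/doc-backend.sock";
    unlink(path);

    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    bind(srv, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
    listen(srv, 1);

    std::string document;                          // the state worth protecting

    for (;;) {
        int gui = accept(srv, NULL, NULL);         // a front-end attached
        char buf[4096];
        ssize_t n;
        while ((n = read(gui, buf, sizeof(buf))) > 0) {
            document.append(buf, n);               // apply the "edit"
            std::ofstream save("/tmp/doc-autosave.txt");
            save << document;                      // persist after every change
        }
        close(gui);    // the GUI (or X) died: wait for a new front-end
    }
}
```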
I don’t have time to listen to them. I’m sure you are all very insightful and have very important things to say, but the medium does not lend itself to casual browsing. Transcripts would be nice.
Podcasts are not news. You can miss podcasts and you wouldn’t honestly miss a thing; it’s just discussion about the Xorg article last week, which already has 355 diverse comments that cover more ground than just me and Thom can discuss.
Podcasts lend themselves to being listened to on journeys, or as background noise whilst doing other tasks, such as coding or cooking (as disparate as those two are).
I think you mean “trips” instead of “journeys”.
British.
I think they both work in US English. Or maybe that’s just a remnant from my high-school days of writing all my papers with a British word processor.
I sorta loved how my “English” teacher kept marking up my British spellings as “incorrect”.
They are, just like American spellings would be incorrect in England. And some British spellings are etymologically incorrect, while the American ones are not (e.g., ‘-ize’ vs. ‘-ise’).
English spellings of English words are never wrong, in my book. If anything, I should have been given extra credit for my sophistication.
Really? Would you like to explain what is inherently better about British spellings? Is “colour” really closer to the actual pronunciation of the word than the American “color”? Is the meaningless inversion in “centre” intrinsically and cosmically better than “center” (neither of which accurately reflect the actual pronunciation)?
As far as I’m concerned, you haven’t gained sophistication, just a sort of annoying pretentiousness.
Because they invented the language.
I think in print you’re missing the playful sense of absurdity. It’s the English language. English. English spelling. Get it?
It’s ironic.
If it were called the American language, then I would be more than fine with American spellings. And being marked down for using English ones wouldn’t be funny, it would be an American language teacher’s patriotic duty to eradicate the language of the tyrannical monarchy.
On a scale of 1 being dead serious and 10 being part of a Lewis Carroll novel, I’m close to a 9 with this, but it is important. Vitally important!!
Um, they invented the language?
So did we! Americans and the British come from the same group of people. We have continued developing the language just as the British have. It’s a bifurcation, not an independent copy.
I stopped listening to this shortly after they started blaming the OS (well, X) for a driver being able to crash the OS. If you want an OS that is resistant to bad drivers, you want a microkernel OS, but that means you are going to take a speed hit, on your graphics drivers of all things. Which is why all of the mainstream OSs can be taken out by crappy graphics drivers.
Yes, X shouldn’t be able to take out the system and, to be fair, it normally doesn’t crash; but if it does, chances are it’s a graphics driver, and only those with the source for the graphics driver can fix that. All the driver stuff is being taken out of X and put into the kernel; then X will run like any app, but a crappy driver will still be able to take out the OS, and it will still only be fixable by those with the source for the graphics driver.
Yet Windows Vista and 7 can survive a graphics card driver crash without taking the system down—the graphics stack just reinitialises and no apps are lost. Microsoft built the right code to do it. Thom is saying that if Microsoft can do it, why can’t X?
I can’t see how it can cope with every crash a bad graphics driver can cause. X should never be responsible for drivers anyway. The graphics drivers are being moved out of X into the kernel, at least that’s what’s happening with the open drivers. The kernel should cope as best it can with any driver crash. It’s not X’s place to worry about driver crashes. Ideally, X shouldn’t even know a driver has crashed and been restarted.
Not crash the system… crash X, which takes along any GUI applications based on toolkits that depend on X. For most users that is equivalent to the same thing. Even if you SSH into your box, any open document that you were using in a graphical environment is gone.
Having the source does not change the fact that a VERY limited subset of developers can fix this due to X being X and having a massive shortage of developers.
oh yeah, BTW, how many people working on intel drivers are not employed by intel or work on X in their free time?
OK, yes, crash X. But that’s because X is doing stuff it shouldn’t be. It’s doing things the kernel should, making it more complex than it should be. The graphics drivers are being moved out of X into the kernel, at least that’s what’s happening with the open drivers. The kernel should cope as best it can with any driver crash. It’s not X’s place to worry about driver crashes. Ideally, X shouldn’t even know a driver has crashed and been restarted. As all this happens, X will shrink, and that will leave less to work on and make it all much more maintainable. We don’t want to build up X, but strip it down. There are more X implementations than Xorg; X doesn’t have to be the monster Xorg is, but Xorg is such a monster because it’s doing so much more, i.e. drivers, than other X implementations.
AFAIK only mode setting and the memory managers are going into the kernel; everything else remains in user space.
Granted, mode setting is probably an area that is hugely problematic for X at the moment, and it’s good to see it going where it belongs.
So a graphics driver could still potentially crash X. Also, it does not mean that it has to be a graphics driver that crashes X, even if that is the most common reason.
“It isn’t X’s job to worry about whether the driver crashed or restarted.” THAT is the whole point: it shouldn’t worry about it, and it shouldn’t take the graphical system down either. We are not talking about it maintaining the display, but the apps that are running shouldn’t be taken down, even if you can’t see them whilst the gfx driver restarts.
Then I must be misunderstanding something, because I thought the reason for DRM, KMS, GEM and Gallium3D was to remove the complexity of having drivers in Xorg. Check out xf86-video-modesetting.
KMS was to remove mode setting from X and, as a result, enable Linux to have proper kernel error screens (“blue screens of death”), which are not possible at the moment if X crashes, since X is doing the mode setting instead of the kernel.
DRI2 is what should have been done before AIGLX even came onto the scene, because DRI was a hideous workaround. It depends on a memory manager such as GEM or TTM.
GEM is there to allow memory migration on modern graphics cards and to allow virtual graphics memory. A memory manager has been in the Unix nvidia drivers since forever and is only now coming to the open drivers. It is the reason why the nvidia driver can have direct rendering under compositing managers instead of AIGLX.
Gallium3D was designed to consolidate graphics driver functionality in order to make graphics drivers smaller, and easier and faster to write and maintain.
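To make the KMS point concrete: once mode setting lives in the kernel, any process can open the DRM device and ask what connectors and modes exist, X server or not. A rough sketch using libdrm – device path hard-coded and error handling omitted for brevity:

```cpp
// Enumerate connected outputs and their modes straight from the kernel's
// KMS interface. Build with something like:
//   g++ kms-list.cpp -I/usr/include/libdrm -ldrm
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/dri/card0", O_RDWR);       // needs a KMS-capable driver
    drmModeRes *res = drmModeGetResources(fd);

    for (int i = 0; i < res->count_connectors; ++i) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (conn && conn->connection == DRM_MODE_CONNECTED) {
            // Each connected output reports the modes the kernel knows about.
            for (int m = 0; m < conn->count_modes; ++m)
                std::printf("connector %u: %s (%dx%d @ %u Hz)\n",
                            conn->connector_id, conn->modes[m].name,
                            conn->modes[m].hdisplay, conn->modes[m].vdisplay,
                            conn->modes[m].vrefresh);
        }
        drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
    close(fd);
    return 0;
}
```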
There are features that X really sucks at and that have been in things like Windows XP:
- Decent monitor configuration
- Screen rotation
- Proper multi-monitor support
- Error screens (BSODs)
Things that would be nice to have:
- multiple monitors with compositing
- no video tearing when running a compositing manager
- input redirection (map inputs to compositing manager changes)
- support for switching gfx cards
Things that Vista does which X does not (and Thom’s main complaint):
- when a gfx driver crashes, it does not take down apps with it.
If I’m running a word processor and X crashes, why should I lose my document? Why is an app’s ability to run dependent on whether it is displayed or not? With KMS it ought to be possible to restart X and still be able to access the application.
It may not be possible with X, but it certainly is with Vista’s display manager. It is not perfect, but I recall that I had a gfx card with faulty memory… I did not get BSODs, only little notifications that the graphics driver had crashed.
And pretty booting, and TTY switching, and design/stability, etc etc.
XRandR
Agreed, but with modesetting you should be able to have this.
Yes, but it also abstracts, allowing:
“Another thing that didn’t get a lot of attention is Alan’s xf86-video-modesetting driver. It’s a dummy X11 driver that uses DRM for modesetting and Gallium3D for graphics acceleration. Because of that it’s hardware independent, meaning that all hardware that has a DRM driver and Gallium3D driver automatically works and is accelerated under X11. Very neat stuff.”
http://zrusin.blogspot.com/2009/02/latest-changes.html
Sounds good to me; the X driver isn’t a “real” driver.
I think that’s XRandR again, but that’s not how I’m set up.
Never seen that, all worked fine for me.
That’s been in for a while:
http://www.youtube.com/watch?v=BrK4c7iFJLs
http://www.youtube.com/watch?v=E3FaMMTe5Ak
Not sure what you mean, but I can’t think of anything you could mean that can’t be done.
Again, agreed, but I think the best solution for this is in progress.
If I’m running a word processor and X crashes, why should I lose my document? Why is an app’s ability to run dependent on whether it is displayed or not? With KMS it ought to be possible to restart X and still be able to access the application.
Agreed, and when X crashes – which hopefully will happen less often once it’s less bloated and stuff has moved into the kernel – it should do so smartly and restore the session. Ideally, though, these crashes won’t be caused by a graphics driver crashing and being restarted; X won’t even know about that anymore.
No doubt, but I’ve also seen a computer where both Windows and Linux crashed out while booting because the graphics card was buggered. There’s only so much that can be done. I can only guess how Xorg would cope with your gfx card, but then of course we would be testing that one problem. Anyway, as I keep saying, I don’t want X to be doing this stuff. It’s almost like Firefox containing your ADSL drivers.
Xrandr exists, but it does not work nearly as well as the equivalents from other desktops. Part of the problem here is nvidia’s refusal to do anything with XRandR beyond 1.1, whilst X is currently on 1.3.
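To be fair to RandR 1.2 and later, the per-output information is there for any client to query. A minimal sketch with Xlib and libXrandr – no error handling, assumes a running X server with RandR >= 1.2:

```cpp
// List RandR outputs and their connection state.
// Build with: g++ randr-list.cpp -lX11 -lXrandr
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
#include <cstdio>

int main()
{
    Display *dpy = XOpenDisplay(NULL);
    Window root = DefaultRootWindow(dpy);
    XRRScreenResources *res = XRRGetScreenResources(dpy, root);

    for (int i = 0; i < res->noutput; ++i) {
        XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
        std::printf("%-10s %s\n", out->name,
                    out->connection == RR_Connected ? "connected"
                                                    : "disconnected");
        XRRFreeOutputInfo(out);
    }
    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}
```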
Video tearing occurs because X is asynchronous (which has plenty of advantages, but few of them apply to the local desktop); this means that video is not synchronised against the refresh rate of the screen, hence the tearing.
You can turn on vsync in the compositing manager, but that slows everything else down to 60 frames/sec too. Effects all start becoming sluggish and jumpy. I have no idea how Vista and OS X manage to keep effects smooth and prevent texture tearing, but I guess it is partly to do with the fact that their display managers are synchronous.
I don’t think those input redirection patches ever got merged…
Looks good to me, but I’m quite happy working on the command line; I put Guake on a while ago, and F12 for a command line is an old reflex from my RISC OS days. 🙂
But anyway, Nvidia and closed drivers are the problem here. I know it’s an old drum, but it is valid. Can’t wait for Nouveau to be mature enough for me. It’s not like my hardware is moving on. 😉
Well, I’ve never seen it, and video is probably the main use of my desktop. Got two sound cards, one for the TV/screen and one for the monitor. Sometimes my wife watches something on the TV/screen and I watch something else on the monitor. It all works just fine, even on my aging, flogged-to-death hardware. I can even surf at the same time! The old (waiting-for-removal) XP install cannot do that, at least not as smoothly, at least not with the same software (vlc/mplayer/firefox).
Nope, they are in, along with multi-pointer.
Video tearing is most visible on high-def material with lots of vertical or horizontal pans, where a lot of the screen is being redrawn.
Amusingly, I changed to the VDPAU renderer and the effect is reduced, but on Xv it is absolutely hideous.
Oh yeah, if you watch things full screen, most compositing managers unredirect the window in order to improve speed and remove tearing.
Haven’t gone HD; the files are too big, the balance of quality to size isn’t for me right now, plus the TV is only PAL widescreen. It’s not something I’ve heard of, and it doesn’t strike me as a massive issue, not compared with getting Nouveau and Gallium3D complete.
I have seen a lot of X crashes in my time. A lot. And I’ve only been using Linux heavily for about 6 years, give or take. I’ve had installations, sometimes of main-stream distros, where the X server never worked right in the first place, even after days and days of coaxing. I’ve had X server crashes cause kernel hangs. I’ve had installations where trying to VT-switch caused a damned kernel panic. They’re not exaggerating, at all: the X server is Windows-98-like in terms of reliability, its frequent foul-ups can cause significant data loss, its failure can render a machine unusable, and the thing is a serious, serious weak spot in most Linux OSs.
I have only had a few X crashes myself. The closed-source ATi drivers have mostly been at fault, though the open-source nouveau driver has also caused some. Nowadays it’s been rock solid, though; I haven’t touched the ATi drivers anymore.
But yeah, X does seem somewhat crash-prone. If they can’t fix the crashiness, then at least they should fix it taking all apps with it.
The key to this is the graphics drivers.
Read up on Gallium3D, DRM, KMS and GEM. Then xf86-video-modesetting and maybe (though it’s not really X) Wayland. The future is happening. 🙂
In the 3 years I’ve been getting my head into OSs, and thus found Linux/Unix, I’ve not had X crash in any of the distros. But I still think there is a problem, though I know it’s being worked on. The main aim of X development at the moment is to move the drivers out of X and into the kernel, where they belong. The VT-switching kernel panic is a classic example of why this move is so important. It’s why modesetting in 2.6.29 got everyone so excited; it’s a big move in the required direction. It’s just an issue that the binary graphics driver people aren’t keeping up, dragging down this important change. Move all the driver stuff out, and strip X right down to something maintainable and able to run as a normal user, not root. Can’t wait. X shouldn’t even know a driver has crashed; the kernel should deal with it as best it can. It’s not X’s place to handle driver crashes.
Thom,
The reason you don’t see Maemo devices anywhere is that they haven’t marketed them widely yet (because they have indeed been geek things, without phone functionality).
Future Maemo devices will be phones, and in direct competition with high end smartphones – and, competing in your usual iphone/symbian/android market. The competition just hasn’t started yet.
I’ve been watching this exchange with interest. I’m a power user with a high level understanding of what’s involved in development, nothing more. Still, here is my unsolicited opinion.
The point of the original article is valid, as are Thom’s criticisms of the likes of Apple (though I must say the X article did read a bit like a rant/flamebait). Also, the point that the way the open source community reacts can look like wanting to have its cake and eat it too is sometimes true.
BUT, a company like Apple has an ethos of delivering what it perceives *its* customers want, and by all reasonable metrics they are right. The price for that is the nannying that many of its more techno-savvy customers don’t like (and I suspect they are a minority). Its software “just works” for the vast majority of its user-base; see Apple’s results for the iPhone, iPod, laptops, satisfaction surveys, etc.
Linux gives you unrivaled flexibility and access to the OS. The price you pay is that not all the effort is focused on areas where you (Thom or otherwise) believe it should be.
You complain about Apple for not being open enough, and you complain at the Open Source community (or at least a part of it) for not making software the way you expect it to work. The words cake, having, eat, and it spring to mind yet again.
Still, there were some useful insights in the comments section to the original article. Flamebait or not, I gained some more understanding from the ensuing comments.
OpenBSD centric …
http://www.openbsd.org/papers/bsdcan08-xorg.pdf
http://www.openbsd.org/papers/fosdem08-xorg.pdf
These two presentations show where X is going (they are about a year old) and how things are going to improve; much of what you discussed is already planned, such as the removal of legacy code and running X as a normal application.
The complete recovery that Windows 7 is capable of is indeed impressive.
I generally liked the treatment of the X issue in the podcast much better than the editorial, I thought the real points you were making came across much better.
Two quibbles I had:
1) Agreed that the source of Thom’s bug was not directly relevant to the point at hand. However, it is a somewhat related issue, since whether it’s a bug in the core X server, a bug in the driver, or even a bug in a kernel module significantly affects the architecture that would have been required to avoid the fault. So no, Thom’s problem isn’t the issue here, but it will become relevant when considering what *kinds* of issues we think are most important to contain.
2) The age of X: yes, it’s an old design. But so, essentially, is the Unix / Linux kernel in that the kernel is obliged to support some pretty weird and archaic abstractions. Being an old design doesn’t inherently make it unviable or impossible to continue building upon. Even though Linux chucks out old unwanted code all the time, it doesn’t remove *functionality* very often, nor does it generally change the userspace ABI. I don’t see that this has to be much different to supporting legacy protocol stuff in X. With sensible architecture it’s often possible to reasonably cleanly cope with legacy stuff whilst building a modern stack.
The major difference here is that the Linux kernel evolves with the hardware. Subsystems are replaced with newer, more flexible systems or rewritten to suit newer hardware.
X, on the other hand, just seems to have pieces grafted on or pieces excised. Subsystems don’t (didn’t) get redesigned to deal with the evolution of hardware, be that input devices or graphics cards.
I suppose that Wayland is probably the way to go… but it isn’t a serious project (only a side/investigation project) and has very few contributors.
Well, that would be incorrect. X does get redesigned: XI2, DRI2, Composite (to replace backing store), Gallium3D, EXA to replace XAA while performing the same function. EXA, in particular, has seen a lot of rework during its existence. Migration has been fixed, additional acceleration hooks have been added, glyph caching has been modified, etc.
They have also removed a huge amount of old cruft from the X server.
And I’m not really sure what the critical difference is between upgrading/reworking a subsystem and replacing one implementation with another. Does not the Linux kernel do the same thing with some regularity (e.g., static dev -> devfs -> udev, the ever-changing wifi stacks)?
OK, there is a lot of planned redesign, but a lot of that is recent and hardly in use. XI2 is planned for X.org 7.5, DRI2 is used by the Intel driver, which currently works badly, and Gallium3D is not used in the mainstream yet. EXA… it’s funny how, as soon as it became relatively stable, the main devs who designed EXA moved on to UXA because EXA does not work well with modern Intel cards; it took ages for EXA performance to overtake XAA, and as soon as it did, something else came along.
It seems to me that a lot of stuff just gets bolted on, AIGLX being the main example of a workaround. The fact that the raster engine in Qt (using the CPU, with no acceleration) is better than X11 for many tasks (the only one where I know X11 is significantly better is text) says a lot, at least in terms of graphics.
Works fine for me on open source Radeon.
It’s getting very close.
Only Intel is dicking around with UXA. Everyone else is still using EXA and several developers are still spending time debugging, optimizing and upgrading EXA. It works great for me on my Radeon.
Really? Seems like it was the right way to go for the problem it was solving. It fixed indirect OpenGL, which was slow and broken for a long time. Now you have two good paths for accelerating OpenGL and they both use the same underlying infrastructure and they both make orthogonal sense (one accelerates via the X server, one on the client-side).
It says a lot about the suckiness of Qt’s performance, which I’ve railed about elsewhere. There are many applications, toolkits and libraries that make extensive use of XRender, OpenGL, etc. that are also fast, whereas Qt4 is slow slow slow. My driver accelerates a number of 2D operations. Qt doesn’t seem to take advantage of that. GTK+ does. Qt3 does. Qt4 doesn’t. I don’t know why and I’m honestly at the point of no longer caring (i.e., I’m giving up on them ever fixing the problem, as each 4.x release is slower than the last even though my driver is getting faster and more stable at the same time).
GTK+ is slow too… Qt3 does not use XRender nearly as much as Qt4; it is much more primitive and, as a result, it isn’t surprising that it is much faster.
I find it strange that Qt can be fast on Windows and OS X and then fail so badly on Linux. Are you saying that the devs are incompetent for failing to take advantage of the “great” underlying technology? Far more likely is that it is much more difficult to use X11 properly, or that there is something preventing Qt from doing so…
There are documented cases of people stating that XRender is amazing at rendering text but much less so for other cases. There is also the issue that graphics drivers don’t accelerate the whole of XRender, for various reasons…
Some of the recent stuff with kernel mode setting, DRI2, etc. has seemed more like the evolution with the hardware that we ought to *expect* X.org to be doing. I think XInput 2 (and therefore multi-pointer X) is actually going into the next release. I thought they’d also done more work on hotplugging displays, but I don’t honestly know if that’s actually complete. Gallium3D is another place where the pace seems to be picking up a bit.
I think even the X developers would admit that they’re a little late to the modern hardware party but they do seem to be trying to catch up! The question is, as you say, whether they’re just bolting stuff on indiscriminately or whether they’re considering the internal architecture of their code…
The nice thing is that most of this work is used by Wayland too (and could probably be useful to other alternatives), so it’s not like all our eggs are going into the X.org basket; it looks to me like the updates to the architecture of X.org are making it easier to develop alternatives.
Anyhow, all I really wanted to point out is that old software doesn’t necessarily have to be crufty or impossible to evolve.
I thought it was a very good podcast. It isn’t easy to listen when someone points out the warts in your favorite OS. Thom and Kroc did it in a way that was reasoned and, quite frankly, correct. Reason is something woefully in short supply on the American side of the pond of late.
I also appreciate the fact that they kept their criticisms to X and only X. These kinds of things have a tendency to dovetail into window managers and such. Even though Thom and Kroc freely admit that they are not Linux experts, they know enough to tell the difference between X.org and the stuff on top. Although I wish one of them would have mentioned how utterly stupid it is that Ctrl-Alt-Backspace is now disabled by default.
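(For anyone who misses it: on a recent X server you can usually get the zap key back at runtime with something like the line below. Making it permanent varies by distro, e.g. an XkbOptions entry or a "DontZap" setting in xorg.conf, so treat this as a sketch rather than gospel.)
  # re-enable Ctrl-Alt-Backspace for the current session (assumes xorg-server 1.6+ with XKB)
  setxkbmap -option terminate:ctrl_alt_bksp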
Lastly, I feel bad for Thom and Kroc because both of them have desktops that are absolutely unstable and puke within 3 hours of heavy usage.
The article annoyed a whole lot of people. Which is sad, because if Thom had been more tactful, less full of himself and had actually known what he was talking about, we might have seen a real discussion.
1. The overall tone of the article is insulting
“and immediately I was reminded of why I do not do any serious work on Linux: the train wreck that is X.org.”
“And here we see why the X.org stack is a steaming pile of dog poo”
It is hard to expect a rational and meaningful discussion after you use this kind of language.
2. Putting words into people’s mouths
Don’t do that. Ever. Especially not in an editorial.
That is, if you want people to take you seriously, then you shouldn’t do it.
By that I mean the conjectures you made repeatedly, such as:
– Oh the main reason people are mad is simply because I said Linux had some flaws.
– People are mad because I said Microsoft is better
While it might be true for -some- people, it is certainly not for others. And generalizing all your critics into one big bucket like this is simply foolish. Of course people are gonna get even more riled up.
3. Passing yourself off as being knowledgeable when you are not
The article stated quite succinctly that the cause of the problem is X, more specifically, the bad design of X11. This implies that you -have- looked into the cause of the problem and have identified it.
As it turned out, you don’t know all that much about X, Linux programming, or probably any non-trivial systems programming; something you should have mentioned in your article in a very clear way.
This is a problem because when people who are not familiar with OSNews Editors read your article, they are going to assume, quite reasonably, that an editor of a website called OSNews would be very informed about what he’s written.
You can certainly write about an unsatisfying experience when using ($insert_OS).
What you shouldn’t do is pretend you have identified a technical problem. The end result is misinformed readers. It’s much better for the readers to read that there -is- a problem with using Linux as an end user, but that the author doesn’t know nor truly care where that problem lies.
This is actually a pervasive attitude, both in editorials and in some podcasts.
I remember that in one of the podcasts, Kroc was lambasting the FHS. Then it turned out that he couldn’t even figure out how to use basic commands like “locate”, “find” or “which” to pinpoint the Firefox executable binary. Afterwards, he mentioned that the last time he used Linux was about 2 years ago and that he had very little experience with it.
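(For the record, any of these would have found it; the exact path varies by distro, so take the output locations as illustrative only.)
  # any of these points at the firefox binary on a typical install
  which firefox
  locate bin/firefox
  find /usr -name firefox 2>/dev/null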
That is just unacceptable and why it still sticks out in my mind.
You may certainly mention what a horrible experience using a Linux / *BSD operating system is.
But when someone is proposing a change to the FHS (which doesn’t have much to do with end users in the first place), then people expect you to have some expertise in the area.
—
These are my main problems with podcasts and articles published by OSNews.
The podcasts are great when they are talking about stuff they know something about. Kroc should talk more about web development; Thom should talk more about… err, I dunno, the history of BeOS and Apple products maybe?
I was ready to give up listening to the podcast, but then they released the 2+ hour long episode about Apple. It was awesome and something they should do more often, i.e. sticking to what they actually know.
—
S
Forgot to mention :
Thom’s reply to the responses he’s gotten feels a bit sleazy to me.
He’s switching his stance from saying X is bad and should be replaced to “the whole point of the article is to raise an issue so it would get discussed”.
The article was a rant, 10 paragraphs of complaint. Add the insulting tone to it and I don’t know how Thom could even think people could discuss it calmly and rationally afterwards. More likely, he never had such an intention in the first place.
Your comment was a rant. As far as I can tell, Thom was reporting the sad situation where X crashing (which still happens fairly frequently) takes down all your open apps with it. It should just reload the video driver and carry on. No rocket science there. No need for a technical degree in computerese.
I’ve just been listening and generally agree with your comments about X but, as I’ve previously said, wow!
You guys love yourselves. Do you own the X/Linux developers? Have you paid them any money?
Comments such as saying the dev community makes you want to “wring their necks”? Up until then I was with you. General douchey comments about how the Mac is bug-free (the non-British guy challenging those statements was a relief) were irritating. However, when it’s mentioned that the Mac may use the same graphics/driver implementation as X + Linux, that isn’t treated as important.
Seriously guys, those devs owe you nothing. Whinging about what they’re doing in an immature manner isn’t what’s needed.
Additionally, you state it is only good for scientific implementations. How? Also, you appear to talk about a lot of things as if you are sure about them when you actually have no idea. Perhaps stick to things you do know?
Thom (and Kroc),
I think much of the hostile reaction you are getting to this podcast is from the “one-eyed” Linux fringe: “There’s nothing wrong with Linux, it’s all perfect, if you have problems with it then go back to Windows where you came from!” That may be a bit harsh, but you see this attitude so often on Linux support forums (Ubuntu is an exception, I find).
Most of these negative folks were not listening to what you were saying, simple as that. You were not bowing before the Altar of Linus, therefore you were flaming their sacred beliefs.
I would love to switch from Windows to Linux, but every time I try it just gets in the way of using my PC (though to be fair, it’s getting much better).
One of my pet peeves is printing; getting my Canon laser printer to work is always hit-and-miss. Geeks will tell you that Canon printers are cr#p and you ought to get an HP LaserJet 6, a twenty-year-old printer that you can’t buy any more. And the concept of buying a new printer just to be able to print is less than “user focused” in my view.
Another peeve is package management. Apt and the Synaptic Package Manager (Ubuntu) are OK, but have any of the developers tried PC-BSD and its package system? Now that really is user friendly, like Windows only better 🙂
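(To be fair, a typical Apt session isn’t hard once you know the commands; something like the lines below, with the package name just as an example. It’s the rest of the experience where PC-BSD feels friendlier to me.)
  # refresh package lists, search, then install (Debian/Ubuntu style)
  sudo apt-get update
  apt-cache search "pdf viewer"
  sudo apt-get install evince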
To finish up: I really enjoy your podcasts, so be as controversial as you like. I listen to the podcasts in bed at night before going to sleep (as a 63-year-old male I have nothing better to do 🙁 ).
Regards,
Peter
With HPLIP the newest HP printers are supported; I just installed two brand new HP Officejet 8000s.
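(In case it helps anyone: once HPLIP is on the system, setting a printer up is usually just its interactive setup tool. Package names and exact steps differ per distro, so this is only a sketch.)
  # install HPLIP and run its interactive setup (Debian/Ubuntu-style package name)
  sudo apt-get install hplip
  hp-setup -i   # may need root, depending on how your CUPS permissions are set up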
As I cannot edit, I’ll add it here.
Criticism is nothing bad, but if one criticizes, then he should also say how it could be done better. It would be even better if one helped out, which reflects something like community spirit.
The flaming done in the original article does not help anyone.
Another point to make is that a lot of people just “consume” and are not willing to inform themselves and learn how to handle complex matters.
Handling a PC, no matter which OS, requires that one learns how to handle it. Having done support for many years, I know the way some people react: if a machine (PC) does not do what they expect it to do, they blame the ones that made it. People tend to be lazy, not taking the trouble to read a manual or inform themselves.
I have had one PC with Linux running for 5 years now. And yes, X crashed on me once, 4 years ago. That X is no highlight may be true, but it has been functioning flawlessly for 4 years now, with video. It also works for loads of other people. I also have a PC with Windows that has had some crashes, also solved, because I read the manual, searched the internet, solved the problem and shared what I did so someone else can take advantage of it.
Flaming is no solution for anyone.
I don’t blame you for having the feelings you do. At the end of the day, you have a perfectly good and functioning printer; why should you need to go out and purchase another one just for the sake of compatibility? The problem is that when there are issues, it is a lot easier to blame the hardware company than to accept that it is a deficiency in the system that needs to be addressed.
I had the very same issue with my printer, a Samsung ML-2010: when it worked, it was buggy, and when it didn’t work, all hell would break loose. The problem I had was people reporting it to be a working and functioning printer when it certainly wasn’t anything close to that.
The problem is that with Linux and X.org, you have a kernel which is GPL; if you link against it, you’re ‘contaminated’. Linus has promised not to go after developers who create proprietary drivers, but I simply don’t trust a situation where something is done on a verbal agreement. A court could easily rule, in 5 years’ time when a new maintainer takes over, that the verbal agreement by Linus has no standing; hence the elaborate shim which Nvidia uses. The solution is for Linux to change to the LGPL, giving perfect legal clarity to developers, and then develop a stable driver ABI/API so that the likes of Nvidia aren’t constantly chasing a moving target. Get those fixed and you’ll find a good amount of the X.org issues will disappear.
Of course it’s the fault of the hardware company. If they didn’t provide a functioning driver for Windows, they, not Windows, would be to blame as well.
The answer is to do some research before buying hardware, and not buy anything reported to have bad Linux support.
I think it’s clear to all the involved parties that the proprietary drivers are an interim solution; it could be compared to the “warez” situation, where software makers are willing to look the other way because it’s not in their interest to forbid consumers from using their software if they would not buy it anyway.
We could well survive with a setup where proprietary blobs just weren’t allowed, and users that need them would install an illegal version of the drivers (just like they are using illegal codecs to consume media currently).
Linux will probably never change to LGPL. It would need agreement from everyone that has contributed code, and that’s not going to happen. It’s the same mechanism that prevents the scenario where Linus would go insane and release a proprietary version of Linux. Linus just doesn’t own the copyright for all the code.
A lot has been written about this on old LKML threads. The solutions needed are pretty mundane technical issues, not deep philosophical problems about licensing etc.
“The solution is for Linux to change to the LGPL, giving perfect legal clarity to developers, and then develop a stable driver ABI/API so that the likes of Nvidia aren’t constantly chasing a moving target.”
Yikes!!!
Talk about THE most difficult solution to the problem. This will happen the day I go to work in my flying car.
Not that I disagree with your idea, but it is highly unlikely to happen.
I’d just note that FreeBSD has, AFAIK, a stable driver ABI (at least within each major release). Don’t know about the other BSDs. It doesn’t seem to have attracted massive hardware manufacturer support to their platform, though obviously there are other factors in play as well. Obviously, they do have a good complement of open source drivers so it’s not like their hardware support is *bad*.
IIRC there are NVidia drivers for FreeBSD, or have been in the past. I’d be interested to know if the stable kernel ABI makes them a bit less painful to maintain over an install’s lifetime…
Although on Linux it’s got a lot easier since DKMS started getting shipped with my distro: the NVidia drivers get automatically recompiled when I boot with a new kernel, even if I’m not using the packaged version of the drivers. Before that, I used to have to do the recompile manually if I was unable to use a packaged version from my distro.
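(Roughly, this is the manual dance that DKMS automates on each kernel change; the module name and version below are just placeholders for whatever out-of-tree module you happen to be tracking, and it assumes the source is already unpacked under /usr/src/<module>-<version> with a dkms.conf.)
  # register, build and install a module against the currently running kernel
  # ("nvidia" / "185.18" are example values, substitute your own)
  sudo dkms add -m nvidia -v 185.18
  sudo dkms build -m nvidia -v 185.18
  sudo dkms install -m nvidia -v 185.18
  # see what DKMS is tracking and for which kernels
  dkms status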