Linked by Thom Holwerda on Sat 15th Jan 2011 10:40 UTC
Mozilla & Gecko clones Yesterday, the ninth Firefox 4.0 beta was released. One of the major new features in Firefox 4.0 is hardware acceleration for everything from canvas drawing to video rendering. Sadly, this feature won't make its way to the Linux version of Firefox 4.0. The reason? X's drivers are "disastrously buggy". Update: Benoit Jacob informed me via email that there's some important nuance: hardware acceleration (OpenGL only) on Linux has been implemented, but due to bugs and issues, only one driver so far has been whitelisted (the proprietary NVIDIA driver).
Not exactly news
by AdamW on Sat 15th Jan 2011 11:06 UTC
AdamW
Member since:
2005-07-06

"He further requests help from Xorg developers and distributors on this issue, since they are still working on it for the future. In other words, if you happen to know people from those parts, be sure to let them know about the difficulties the Firefox team is apparently having with X. "

Please, please - don't bother. The fact that the OpenGL implementations in current X drivers for many cards are buggy is hardly news to anyone, least of all the developers. Inundating them with 'OMG WHERE'S MY FIREFOX ACCELERATION U SUCK!' messages is not going to help.

Reply Score: 8

RE: Not exactly news
by Thom_Holwerda on Sat 15th Jan 2011 11:23 UTC in reply to "Not exactly news"
Thom_Holwerda Member since:
2005-06-29

Sorry, I should've worded that better. Fixed it in the article.

Reply Score: 2

RE: Not exactly news
by somebody on Sat 15th Jan 2011 13:00 UTC in reply to "Not exactly news"
somebody Member since:
2005-07-07

nope, but moving to other browsers that perform decently will;)

i went for chrome long ago and will never go back to the bloat called firefox

Reply Score: 2

RE[2]: Not exactly news
by Savior on Sat 15th Jan 2011 13:38 UTC in reply to "RE: Not exactly news"
Savior Member since:
2006-09-02

nope, but moving to other browsers that perform decently will;)

You can change browsers, but not the fact that drivers and X are buggy beyond belief. They won't just magically work for Chrome.

i went for chrome long ago and will never go back to the bloat called firefox

Bloat as in... memory usage, for example? Then you'd better look back, because Firefox actually needs less.

Reply Score: 5

RE[3]: Not exactly news
by ndrw on Sat 15th Jan 2011 14:27 UTC in reply to "RE[2]: Not exactly news"
ndrw Member since:
2009-06-30

Well, there is a chance they will.

Linux OpenGL implementations are not very different from what we (used to?) have with html+css+... implementations. They are a buggy, inconsistent mess but if you know the safe path across the minefield you can still produce a working product. Sometimes the obvious path is not the "proper" one.

It's likely that the Mozilla guys are performing some operations that don't match the semantics of the underlying layers well (after all, it's a multiplatform program). Such corner cases are more likely to have bugs or suffer from poor performance. This of course is no excuse for the guys producing these bugs, but I can easily imagine another application doing the same things differently and managing to work around these bugs.

Reply Score: 4

RE[4]: Not exactly news
by jacquouille on Sat 15th Jan 2011 16:22 UTC in reply to "RE[3]: Not exactly news"
jacquouille Member since:
2006-01-02

Yep, indeed: with WebGL we are basically exposing 95% of the OpenGL API to random scripts from the Web. So even "innocuous" graphics driver bugs can suddenly become major security issues (e.g. leaking video memory to scripts would be a huge security flaw). Even a plain crash is considered a DoS vulnerability when scripts can trigger it at will. So yes, WebGL does put much stricter requirements on drivers than, say, video games or compiz.

Reply Score: 3

RE[5]: Not exactly news
by Veto on Sat 15th Jan 2011 18:54 UTC in reply to "RE[4]: Not exactly news"
Veto Member since:
2010-11-13

But is it the job of Firefox to shield users from blatant (security) bugs in the underlying OpenGL implementations, neglecting the bug-free ones in the process?

Rather, more use and exposure would motivate the driver developers to fix their buggy drivers.

Perhaps a blacklist could be implemented, notifying users that their driver is buggy and that Firefox will run unaccelerated? This would raise awareness without negatively affecting the "good systems".

Edit: I see you have already implemented a blacklist :-) But perhaps still notifying the user would be a good idea?

Edited 2011-01-15 18:58 UTC

Reply Score: 1

RE[6]: Not exactly news
by jacquouille on Sat 15th Jan 2011 23:13 UTC in reply to "RE[5]: Not exactly news"
jacquouille Member since:
2006-01-02

But is it the job of Firefox to shield users from blatant (security) bugs in the underlying OpenGL implementations, neglecting the bug-free ones in the process?


First of all, if an implementation is shown to be 'bug-free' then we'll gladly whitelist it in the next minor update.

And yes, it is our job to shield the user from buggy drivers, buggy system libraries, whatever. You don't want to have to wait for your OpenGL driver to be fixed to be able to use Firefox 4 without random crashes.

Rather, more use and exposure would motivate the driver developers to fix their buggy drivers.


That would be nice, but we also need to be able to ship Firefox 4 ASAP without lowering our quality standards.

Perhaps a blacklist could be implemented, notifying users that their driver is buggy and that Firefox will run unaccelerated? This would raise awareness without negatively affecting the "good systems".


This is information of a very technical nature that most users won't know how to act upon. For technical users, we *are* already printing this information in the terminal.

Edited 2011-01-15 23:14 UTC

Reply Score: 3

RE[5]: Not exactly news
by ndrw on Sun 16th Jan 2011 05:30 UTC in reply to "RE[4]: Not exactly news"
ndrw Member since:
2009-06-30

Bummer. I forgot WebGL is involved. That indeed complicates things "a bit" as you no longer fully control which parts of the OpenGL API get used.

Perhaps a more graceful solution would be to selectively white/blacklist parts of the WebGL API, or WebGL itself.

Reply Score: 2

RE[4]: Not exactly news
by renox on Sun 16th Jan 2011 12:27 UTC in reply to "RE[3]: Not exactly news"
renox Member since:
2005-07-06

Linux OpenGL implementations are not very different from what we (used to?) have with html+css+... implementations. They are a buggy, inconsistent mess but if you know the safe path across the minefield you can still produce a working product.


There's a major difference between the two: with a sane (process-oriented) design, a bug in an HTML (etc.) component only crashes a tab, or at worst the web browser (if poorly designed). A bug in an OpenGL driver can crash the *whole* computer, and it is much, much more complex to debug, especially with hardware acceleration; and without hardware acceleration, OpenGL isn't very interesting!

Reply Score: 3

RE[3]: Not exactly news
by renox on Sun 16th Jan 2011 12:21 UTC in reply to "RE[2]: Not exactly news"
renox Member since:
2005-07-06

Bloat as in... memory usage, for example? Then you'd better look back, because Firefox actually needs less.

Chrome is consistently more responsive than FF on any computer that I've used (I suspect that it is thanks to its multi-process design).
That's probably why the GP said that and I agree with him.

Reply Score: 2

RE[4]: Not exactly news
by Neolander on Sun 16th Jan 2011 13:05 UTC in reply to "RE[3]: Not exactly news"
Neolander Member since:
2010-03-08

As I already said, there's a difference between being unresponsive and being bloated.

Not all lean software is responsive. A single-threaded design where UI rendering is on the same thread as the number-crunching algorithms (like Firefox's, though thankfully they're working on that) is all it takes to make software unresponsive, no matter how well the rest is coded.

Edited 2011-01-16 13:12 UTC

Reply Score: 2

RE[2]: Not exactly news
by Neolander on Sat 15th Jan 2011 13:40 UTC in reply to "RE: Not exactly news"
Neolander Member since:
2010-03-08

Bloat is not exactly the best term when trying to make a Firefox vs. Chrome comparison that favours Chrome. Firefox is now, AFAIK, close to being the mainstream web browser that consumes the least memory, while Chrome would be near the top with its multi-process model.

Being more responsive does not equate to being less bloated. Vista x64 is probably very responsive on a machine based on one of those upcoming Bulldozer CPUs from AMD, backed by 16GB of DDR4 RAM, 4 Vertex 2 SSDs in RAID 0, and four high-end graphics cards in SLI. That wouldn't make it less bloated. Responsiveness depends on proper use of threads and having powerful hardware underneath, not so much on how heavy the software is (except when you go the Adobe way and make software so heavy that your OS constantly has to swap data in and out while you run it because your RAM is full).

Edited 2011-01-15 13:43 UTC

Reply Score: 3

RE[3]: Not exactly news
by aliquis on Sun 16th Jan 2011 14:28 UTC in reply to "RE[2]: Not exactly news"
aliquis Member since:
2005-07-23

But even if his word usage was wrong, it's hard to argue with "it feels like crap after a while", regardless of how well it runs all the crap that makes it run like a turd.

Reply Score: 3

RE[2]: Not exactly news
by edvim on Sat 15th Jan 2011 15:49 UTC in reply to "RE: Not exactly news"
edvim Member since:
2010-03-12

Geez, so many fickle users out there. Most don't appreciate even a little that it was Firefox that stirred up the browser wars, when the alternatives were a sluggish Netscape and an anti-standards IE. So your Firefox is 'sluggish'? Sounds like you have other issues on your system too. My primary box is a six-year-old P4 and Firefox launches/views pages pretty well.
Also don't forget Google only offered Chrome to Windows users for quite a while, leaving Linux users with the somewhat supported 'build your own' option of Chromium. Their excuse was a public statement about how it was too difficult and problematic to offer Linux or OS X versions. Yet Firefox and Opera have been putting out concurrent versions for multiple platforms for years. (OK, well, Opera has been concurrent version-wise only recently, but their developers are too busy innovating unique ideas that other browsers pick up on.)

Reply Score: 4

RE[3]: Not exactly news
by avgalen on Sun 16th Jan 2011 02:07 UTC in reply to "RE[2]: Not exactly news"
avgalen Member since:
2010-09-23

So on a topic about a mature multi-platform browser that is having a big problem with Linux....

....you complain that a new browser didn't immediately provide a well-functioning browser for Linux?

ooooh, the irony. Developing multi-platform software is very difficult, especially if you cannot rely on the underlying platform, have cutting-edge technology requirements, or depend on other platform-specific bits.

Reply Score: 1

RE: Not exactly news
by fran on Sat 15th Jan 2011 21:14 UTC in reply to "Not exactly news"
fran Member since:
2010-08-06

They call it lobbying.
The more media presence the problem gets, the better the chances of a speedier solution.

The squeakiest wheel gets the grease.

Reply Score: 3

White list ?
by torturedutopian on Sat 15th Jan 2011 11:43 UTC
torturedutopian
Member since:
2010-04-24

Oh no... But that was expected.

Couldn't the OpenGL mode be enabled on a whitelist basis? I thought the NVIDIA proprietary drivers were pretty good, as far as 3D is concerned (you can run pretty recent games under Wine, for instance)?

I thought the situation was pretty good, with the Video Acceleration API, compositing & 3D accel all working pretty well on NVIDIA cards.

Reply Score: 2

RE: White list ?
by Lennie on Sat 15th Jan 2011 12:52 UTC in reply to "White list ?"
Lennie Member since:
2007-09-22

First thing we need is a good way to test.

I think this is the way you can help test it:

http://jagriffin.wordpress.com/2010/08/30/introducting-grafx-bot/
https://addons.mozilla.org/en-US/firefox/addon/grafx-bot/

But on my machine it did not seem to want to be enabled, or it would just crash and burn (I used a second Firefox profile to even get that far :-( ).

Here are some of the results:

http://jagriffin.wordpress.com/2010/09/22/grafxbot-results-update/

UPDATE: ok, just tried again and now I do seem to be able to do a proper test run with a separate profile (it is actually recommended now).

Edited 2011-01-15 12:59 UTC

Reply Score: 3

RE[2]: White list ?
by jacquouille on Sat 15th Jan 2011 15:53 UTC in reply to "RE: White list ?"
jacquouille Member since:
2006-01-02

Also, for WebGL (which is enabled on Linux if your driver is whitelisted), the best way to test is to run the official WebGL conformance test suite:

https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/te...

Click 'run tests', copy your results into a text file and attach it to this bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=624593
and of course tell us the precise driver and driver version, Xorg version, and kernel version you're using.
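If you're not sure where to find those, commands along these lines should gather most of it (a sketch; it assumes glxinfo, the X server and standard utilities are available):

glxinfo | grep -E "OpenGL (vendor|renderer|version)"   # GL driver vendor/version
X -version 2>&1 | head -n 3                            # Xorg version
uname -r                                               # kernel version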

If a driver can pass almost all these tests (and doesn't crash running them...) then it's quite probably good enough and we should try to whitelist it!

Looking forward to extending the whitelist once we get more data. It must be said that the above WebGL test suite is, AFAIK, the first time Khronos has published a complete, public test suite for a *GL standard. I hope to convince GL driver developers to test their drivers against it.

Reply Score: 5

RE: White list ?
by jacquouille on Sat 15th Jan 2011 15:47 UTC in reply to "White list ?"
jacquouille Member since:
2006-01-02

Yep, the NVIDIA proprietary driver is pretty good for us, that's why it's whitelisted: see my other comment on this story, http://www.osnews.com/permalink?458166

Reply Score: 2

Comment by mtzmtulivu
by mtzmtulivu on Sat 15th Jan 2011 14:39 UTC
mtzmtulivu
Member since:
2006-11-14

what about the proprietary drivers from nvidia and ati? are they buggy too?

is there a way to manually turn them on/off?

Reply Score: 2

RE: Comment by mtzmtulivu
by jacquouille on Sat 15th Jan 2011 15:43 UTC in reply to "Comment by mtzmtulivu"
jacquouille Member since:
2006-01-02

The NVIDIA proprietary driver is not buggy for what we are doing (which is pure OpenGL). We are enabling hardware acceleration on X with the NVIDIA proprietary driver. So the title of this OSNews story is inaccurate.

The FGLRX driver is crashier; it's blacklisted at the moment, but this could change (everything hopefully will change :-) )

Yes, you can turn the whole driver blacklisting off by defining the MOZ_GLX_IGNORE_BLACKLIST environment variable. Just launch firefox with this command (you can use it in the properties of your desktop icon, too):

MOZ_GLX_IGNORE_BLACKLIST=1 firefox
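
If you launch Firefox from a menu rather than a terminal, the usual trick is to wrap the command with env in the launcher's Exec line (a sketch of a freedesktop.org-style .desktop entry; only the Exec line is shown):

Exec=env MOZ_GLX_IGNORE_BLACKLIST=1 firefox %u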

We did this blacklisting to put an end to the endless series of Linux crashes that were caused by buggy graphics drivers and were causing lots of grief among Linux users ("Firefox 4 is crashy!"). This was the top cause of crashiness on Linux.

We are looking forward to un-blacklisting drivers as soon as they get good enough; see the discussion in this bug (scroll down past the first comments sent by an angry user):

https://bugzilla.mozilla.org/show_bug.cgi?id=624593

Edited 2011-01-15 15:44 UTC

Reply Score: 9

RE[2]: Comment by mtzmtulivu
by anda_skoa on Sat 15th Jan 2011 18:43 UTC in reply to "RE: Comment by mtzmtulivu"
anda_skoa Member since:
2005-07-07

Yes, you can turn the whole driver blacklisting off by defining the MOZ_GLX_IGNORE_BLACKLIST environment variable. Just launch firefox with this command (you can use it in the properties of your desktop icon, too):

MOZ_GLX_IGNORE_BLACKLIST=1 firefox


While I really like the fact that this is a runtime and not a build-time choice, why do it as an environment variable and not in about:config?

An environment variable requires that .desktop files for menus and other UI launchers be modified, or that some system- or user-level environment script be changed.

Especially since I read in one of your other comments that another related feature is switchable through about:config.

Reply Score: 3

RE[3]: Comment by mtzmtulivu
by jacquouille on Sat 15th Jan 2011 18:46 UTC in reply to "RE[2]: Comment by mtzmtulivu"
jacquouille Member since:
2006-01-02

Really it's just because we're in a rush now and an environment variable switch can be implemented in 1 line of code (while an about:config switch is, say, 5 lines of code ;-) )
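
To illustrate the difference (just a sketch of the idea in plain C, not the actual Mozilla code): the environment-variable switch boils down to a single check at the point of use, whereas an about:config pref has to be plumbed through the preferences service.

#include <stdlib.h>
#include <stdbool.h>

/* One line of real logic: the variable being set at all means "skip the blacklist". */
static bool ignore_glx_blacklist(void)
{
    return getenv("MOZ_GLX_IGNORE_BLACKLIST") != NULL;
}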

Heck, ideally I would like the existing about:config force-enable switches to circumvent the blacklist. But that was harder to implement due to where the GLX blacklisting is implemented.

Eventually yes it'll be in about:config.

Edited 2011-01-15 18:49 UTC

Reply Score: 2

RE[4]: Comment by mtzmtulivu
by anda_skoa on Sat 15th Jan 2011 20:55 UTC in reply to "RE[3]: Comment by mtzmtulivu"
anda_skoa Member since:
2005-07-07

Really it's just because we're in a rush now and an environment variable switch can be implemented in 1 line of code (while an about:config switch is, say, 5 lines of code ;-) )


I see ;)
Still, if it is about 5 lines, it might result in more people reporting whitelistable combinations.


Eventually yes it'll be in about:config.


Excellent!

It is good to see that another big free software project (in addition to KDE) is now running into the driver bugs holding back implementations of current state-of-the-art interfaces.

Maybe you can share notes on whitelisted/blacklisted combinations with the developers of KWin. They've been in this very situation for a couple of months now and might have data which could be useful to you as well.

Reply Score: 3

RE[5]: Comment by mtzmtulivu
by jacquouille on Sat 15th Jan 2011 23:17 UTC in reply to "RE[4]: Comment by mtzmtulivu"
jacquouille Member since:
2006-01-02

Actually I've gotten in touch with them already, asking for that ;)

http://lists.kde.org/?l=kwin&m=129231532921117&w=2

They didn't have a blacklist or whitelist already; what came out of this conversation is that we're doing different stuff than they are.

Reply Score: 2

RE[6]: Comment by mtzmtulivu
by anda_skoa on Sun 16th Jan 2011 12:34 UTC in reply to "RE[5]: Comment by mtzmtulivu"
anda_skoa Member since:
2005-07-07

Actually I've gotten in touch with them already, asking for that ;)

http://lists.kde.org/?l=kwin&m=129231532921117&w=2

They didn't have a blacklist or whitelist already; what came out of this conversation is that we're doing different stuff than they are.


Ah, interesting read!

I just thought of KWin because their problems with driver quality had also made quite some waves, but indeed their needs don't overlap with yours much.

How about projects which use OpenGL for more than compositing? GNOME Shell or GNOME/KDE games?

Do you plan on making your blacklist/whitelist public (on some developer page) so other developers could re-use it?

Reply Score: 2

RE[7]: Comment by mtzmtulivu
by jacquouille on Sun 16th Jan 2011 16:16 UTC in reply to "RE[6]: Comment by mtzmtulivu"
jacquouille Member since:
2006-01-02


How about projects which use OpenGL for more than compositing? GNOME Shell


I expect GNOME Shell to be quite similar to KWin in this respect.


or GNOME/KDE games?


You don't usually bother making a driver blacklist for a game. If a game crashes because of a driver, so be it.


Do you plan on making your blacklist/whitelist public (on some developer page) so other developers could re-use it?


Of course, this is open source :-)

The OpenGL-on-X driver blacklist (currently very simple: it just allows NVIDIA) is implemented here:

http://hg.mozilla.org/mozilla-central/file/f9f48079910f/gfx/thebes/...
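
The gist of it (a hypothetical sketch, not the actual code from that file) is little more than a check on the GL vendor string:

#include <string.h>
#include <stdbool.h>

/* Sketch: effectively a whitelist rather than a blacklist; only the
   proprietary NVIDIA driver is trusted for now, identified here by
   its GL_VENDOR string. */
static bool is_gl_driver_whitelisted(const char *gl_vendor)
{
    return gl_vendor != NULL && strstr(gl_vendor, "NVIDIA") != NULL;
}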

Reply Score: 2

RE[5]: Comment by mtzmtulivu
by mat69 on Sun 16th Jan 2011 00:33 UTC in reply to "RE[4]: Comment by mtzmtulivu"
mat69 Member since:
2006-03-29

The interesting part is that KWin did not run into driver bugs.
The driver bugs ran into KWin.

To understand that, one has to look back at KDE 4.0:
KWin (!) worked quite well on most drivers.
Yet in KDE 4.5 things suddenly did not work. Why?
No, not because of KWin; the code in those areas was mostly untouched since 4.0!
Instead, drivers suddenly claimed to support features when this was not the case.

I.e. 4.0: drivers honest --> few KWin problems;
4.5: drivers lying --> many problems.

And the only way around this for KWin is blacklisting, which is a lot of work.

Reply Score: 4

If it's that bad
by SlackerJack on Sat 15th Jan 2011 15:30 UTC
SlackerJack
Member since:
2005-11-12

Then just support the most stable driver. I'm pretty sure most people use the NVIDIA binary, so just detect it and enable 3D support.

Adobe supports VDPAU in their Flash beta, so it's time to stop making excuses and support what's established, stable and working, which the NVIDIA binary is. The AMD binary driver too.

Reply Score: 4

RE: If it's that bad
by jacquouille on Sat 15th Jan 2011 16:17 UTC in reply to "If it's that bad"
jacquouille Member since:
2006-01-02

That's exactly what we are doing ;-) The NVIDIA proprietary driver is whitelisted at the moment, so you get WebGL right away. If you want accelerated compositing too (at the risk of losing the benefits of XRender), go to about:config and set layers.acceleration.force-enabled to true.

Reply Score: 2

Reverse engineering sucks
by spiderman on Sat 15th Jan 2011 15:47 UTC
spiderman
Member since:
2008-10-23

If the manufacturers released proper specs for their cards, maybe we could have first-class drivers for Xorg.

Reply Score: 5

RE: Reverse engineering sucks
by UltraZelda64 on Sat 15th Jan 2011 16:07 UTC in reply to "Reverse engineering sucks"
UltraZelda64 Member since:
2006-12-05

Highly likely... sad but true. The blame could probably be directed toward the graphics hardware manufacturers with far more accuracy and truth than toward the X.org and open-source driver developers.

Reply Score: 1

RE[2]: Reverse engineering sucks
by BluenoseJake on Sat 15th Jan 2011 17:03 UTC in reply to "RE: Reverse engineering sucks"
BluenoseJake Member since:
2005-08-11

Or you could put the blame where it really lies, with Xorg. Nvidia's drivers work better because they basically replace half the Xorg stack.

This state of affairs is rather retarded. If you have to replace half the underlying graphics stack to get good performance and stability, then the graphics stack must be the grand master of suck.

Reply Score: 7

RE[3]: Reverse engineering sucks
by spiderman on Sat 15th Jan 2011 17:45 UTC in reply to "RE[2]: Reverse engineering sucks"
spiderman Member since:
2008-10-23

No, sorry, but you can't blame Xorg. It is an open source project and anyone can contribute. The question is: why does Nvidia not contribute to Xorg, instead of reimplementing its stack in its proprietary driver?
Xorg is an extremely complex piece of software, but it is also extremely capable. It is understandable that it has more bugs than the Mac OS X or Windows graphics stacks; MS Office has more bugs than Notepad. Xorg just needs more developers and cooperation from hardware manufacturers.
What if you manufacture a good card but the driver for it sucks? Your product sucks overall. Manufacturers need to put more effort into the software part on Linux. They will lose customers in the long run if they don't.

Reply Score: 2

BluenoseJake Member since:
2005-08-11

uh, I am certainly blaming Xorg. It's overly complex, and has too many features that have no real place in today's computing environment. I use Linux every day, and Xorg is the weak spot in the whole OS: it's slow, and it crashes (not often, but Windows 7 has never crashed on the same computer, nor did Vista). There is a reason that Red Hat and Ubuntu are looking at Wayland, and that is simplicity, reliability and speed.

Reply Score: 3

RE[5]: Reverse engineering sucks
by spiderman on Sat 15th Jan 2011 19:00 UTC in reply to "RE[4]: Reverse engineering sucks"
spiderman Member since:
2008-10-23

Wait. You are too quick to dismiss Xorg features as irrelevant. They are relevant to many people. Wayland may be a nice alternative for you, but it is still far from being as stable as Xorg. Xorg has problems but it has many strengths. You would not be using it if it had more problems than useful features.
For me there is no alternative to Xorg because I need network transparency. Yes, network transparency is relevant, today. On Windows you have to use a hack like VNC or a product like RDP, which both suck, or buy Citrix, which is also a hack, costs an arm and a leg, and sucks. When you are used to Xorg and NX, this is a huge step back.
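
(For readers who haven't used it: network transparency means you can run something like the following, where "appserver" is a made-up hostname:

ssh -X appserver firefox

and the remote application's window appears on your local desktop, integrated as if it were a local program.)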

Reply Score: 5

BluenoseJake Member since:
2005-08-11

Both RDP and VNC are much more usable than Xorg's network transparency, whether over wireless or the Internet, where Xorg is unusably slow. RDP even supports 3D on Windows 7 and Vista.

People don't need network transparency, people need network access, which Windows does 100% better than Xorg, and VNC does a better job too. FreeNX proves that Linux can provide a proper, usable remote GUI environment, but holding on to this broken functionality is part of the problem with Xorg. You can even use RDP with Linux, via xRDP, which is much more usable than network transparency.

Edited 2011-01-15 19:10 UTC

Reply Score: 3

RE[7]: Reverse engineering sucks
by spiderman on Sat 15th Jan 2011 19:37 UTC in reply to "RE[6]: Reverse engineering sucks"
spiderman Member since:
2008-10-23

I could not disagree more.

VNC and RDP do not replace Xorg. Only Citrix does, and poorly. With Xorg you can have an application server and administer your applications in a single place. Just let your users connect and use their applications as if they were local. They can resize windows, put them next to their local windows, cut and paste, everything. It is integrated into their desktop. They don't need another desktop with poor-quality graphics and scrollbars.

FreeNX is nice but it does not replace Xorg either. It depends on it; it is a layer on top of Xorg.

Reply Score: 3

RE[7]: Reverse engineering sucks
by gilboa on Sat 15th Jan 2011 20:26 UTC in reply to "RE[6]: Reverse engineering sucks"
gilboa Member since:
2005-07-06

People don't need network transparency, people need network access


I assume that you have actual numbers to back this claim, as opposed to simply making things up, right?

- Gilboa
(Ignoring for a second that network transparency has -nothing- to do with X.org performance, as local->local display doesn't use the same network-aware code paths...)

Edited 2011-01-15 20:27 UTC

Reply Score: 5

RE[6]: Reverse engineering sucks
by ndrw on Sun 16th Jan 2011 06:29 UTC in reply to "RE[5]: Reverse engineering sucks"
ndrw Member since:
2009-06-30

X is no longer as network-transparent as it used to be, unfortunately.

That was perhaps the case 20 years ago, when most computer graphics, font rendering etc. were done on the server side. Now we have XShm and XRender, which enable reasonably fast client-side rendering on local machines but no longer work across the network (at least not if you care about the user experience).

The network itself has changed too. Over the years bandwidth has increased dramatically, but latency hasn't changed that much. (Hard)wired networks are now often replaced with wifi connections, VPNs and other ad-hoc networks.

X is still able to deliver on its promise on LANs (ideally with NIS/NFS) and with some classes of applications (e.g. engineering apps using 2D vector rendering). But with most other applications, even if the program manages to start up properly, you still have to be very aware of the fact that it is not running locally (if only for performance and reliability reasons).

Rdesktop and VNC chose a different way: if it is no longer possible to make graphics rendering network-transparent, let's make it obvious and put the user in control. Thus, having a remote session in a separate desktop is GOOD - it makes it easy to find out which application is running where. Having the possibility to disconnect from and reconnect to a remote session (and thus move your existing session between computers) is GOOD. Using protocols that benefit from increased bandwidth and don't stress the network latency (asynchronous transfer of bitmaps, video) is GOOD. Having additional features (audio redirection, file transfer) is GOOD.

After all, with a network you can do much more than just open a window from machine A on machine B.

Reply Score: 6

RE[4]: Reverse engineering sucks
by aaronb on Sat 15th Jan 2011 18:44 UTC in reply to "RE[3]: Reverse engineering sucks"
aaronb Member since:
2005-07-06

I think you hit the nail on the head. X is extremely complex and extremely capable, and so it can take an extreme amount of effort and time to get stable drivers.

IMHO we should think about making X and/or Wayland as simple and efficient as possible while still having relevant features.

Just to clarify, I am not blaming this all on X, but complexity does not help.

Edited 2011-01-15 18:46 UTC

Reply Score: 2

RE[3]: Reverse engineering sucks
by Carewolf on Sun 16th Jan 2011 15:01 UTC in reply to "RE[2]: Reverse engineering sucks"
Carewolf Member since:
2005-09-08

NVidia works better for OpenGL, and only OpenGL, because that is what they focus on. Even the ancient VESA driver is faster and more stable than the NVidia drivers when it comes to 2D graphics, and the nouveau driver is somewhere between 100 and 1000 times faster while using 100 times less memory (Xorg with nvidia: 300 MB resident; Xorg with nouveau: 22 MB, where almost all of it is the binaries).
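
For reference, a quick way to check this kind of number on your own machine (a sketch; assumes the usual procps ps):

ps -C Xorg -o rss=,comm=   # resident set size of the X server, in KB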

Reply Score: 4

RE[4]: Reverse engineering sucks
by tyrione on Tue 18th Jan 2011 18:58 UTC in reply to "RE[3]: Reverse engineering sucks"
tyrione Member since:
2005-11-21

NVidia works better for OpenGL, and only OpenGL, because that is what they focus on. Even the ancient VESA driver is faster and more stable than the NVidia drivers when it comes to 2D graphics, and the nouveau driver is somewhere between 100 and 1000 times faster while using 100 times less memory (Xorg with nvidia: 300 MB resident; Xorg with nouveau: 22 MB, where almost all of it is the binaries).


Sorry, but the nvidia 260.19.21-1 on Debian sits at 136MB presently.

4 tabs in Chrome Unstable, 60+ files open in Kate, 3 tabs in Konsole, Inkscape Trunk open as well, plus the usual crap running in the background for KDE 4.5.x.

Reply Score: 2

RE: Reverse engineering sucks
by Oliver on Sat 15th Jan 2011 16:18 UTC in reply to "Reverse engineering sucks"
Oliver Member since:
2006-07-15

We have proper specs for most of the AMD/ATI chips. There are specs for Intel chips and drivers from Intel. Is this of any help? No.

Reply Score: 1

RE[2]: Reverse engineering sucks
by jabjoe on Sat 15th Jan 2011 18:47 UTC in reply to "RE: Reverse engineering sucks"
jabjoe Member since:
2009-05-06

It does help. The open drivers for both of those offer 3D as standard. The open drivers for NVidia only offer 'experimental' 3D, after much blood, sweat and tears of reverse engineering. The Gallium3D/DRM changes are not complete yet; as we get to the optimizing end of things, it's going to get interesting. Phoronix is quite a good place to keep up.

Reply Score: 3

RE[2]: Reverse engineering sucks
by tyrione on Sun 16th Jan 2011 02:48 UTC in reply to "RE: Reverse engineering sucks"
tyrione Member since:
2005-11-21

We have proper specs for most of the AMD/ATI chips. There are specs for Intel chips and drivers from Intel. Is this of any help? No.


You'd think it would help, especially with Intel, seeing as the X developers work for them.

Reply Score: 2

RE: Reverse engineering sucks
by nt_jerkface on Sat 15th Jan 2011 17:27 UTC in reply to "Reverse engineering sucks"
nt_jerkface Member since:
2009-08-26

You would have better GPU drivers if Linus provided a stable ABI.

But working with third parties has never been a goal of Linus's, any more than creating a desktop OS.

He also doesn't seem to care about creating a server OS that meets the needs of the market, given how often the unstable ABI has broken VMware.

Because having a stable ABI in a *nix is just unthinkable... like OS X, Solaris, oh wait, never mind. Where are all the benefits of the unstable ABI? How has Linux leaped past other *nix systems?

Reply Score: 4

nt_jerkface Member since:
2009-08-26

I read that years ago.

You didn't answer my question.

Reply Score: 2

RE[4]: Reverse engineering sucks
by jabjoe on Sun 16th Jan 2011 08:41 UTC in reply to "RE[3]: Reverse engineering sucks"
jabjoe Member since:
2009-05-06

read it again

Reply Score: 1

lucas_maximus Member since:
2009-08-18

And many people think it is wrong.

If Linux had to ensure that it preserve a stable source interface, a new interface would have been created, and the older, broken one would have had to be maintained over time, leading to extra work for the USB developers


This is not how responsible devs work. You tell them you are supporting the interface until date X and mark it as deprecated. E.g., I was fixing some Java code I wrote 3 years ago for Java 1.3... I added the fixes and compiled, and I was warned that things were to be deprecated in a future version... so I updated accordingly.

Simple, get your kernel driver into the main kernel tree (remember we are talking about GPL released drivers here, if your code doesn't fall under this category, good luck, you are on your own here, you leech <insert link to leech comment from Andrew and Linus here>.)


This here is basically a big middle finger up to any driver dev. It is basically "GPL or else".

A number of times this has caused internal kernel interfaces to be reworked to prevent the security problem from occurring. When this happens, all drivers that use the interfaces were also fixed at the same time, ensuring that the security problem was fixed and could not come back at some future time accidentally.


Any change to code is a risk: it can regress functionality and/or introduce new bugs... any first-year software engineering student knows this.

Reply Score: 2

RE[6]: Reverse engineering sucks
by Nth_Man on Thu 20th Jan 2011 00:30 UTC in reply to "RE[5]: Reverse engineering sucks"
Nth_Man Member since:
2010-05-16

This here is basically a big middle finger up to any driver dev.

It seems like you didn't really read the http://lxr.linux.no/#linux+v2.6.37/Documentation/stable_api_nonsens... text.

Reply Score: 1

lucas_maximus Member since:
2009-08-18

I did ... you obviously didn't read my response ... It forces you to GPL your driver ... you have to be part of the club.

Reply Score: 2

Nth_Man Member since:
2010-05-16

There are some things that are not easy to talk about. I'll try to sum up the results of past conversations:

A binary-only driver is very bad news, and should be shunned. That proprietary software doesn't respect users' freedom: users are not free to run the program as they wish, to study the source code and change it so that the program does what they wish, or to redistribute copies with or without changes. Without these freedoms, the users cannot control the software or their computing. As Stallman says: without these freedoms, the software controls the users.

Also, as Rick Moen said: binary-only drivers are typically buggy for lack of peer review, poorly maintained, not portable to newer or different CPU architectures, prone to breakage with routine kernel or other system upgrades, etc.

In the article at http://www.kroah.com/log/linux/stable%5Fapi%5Fnonsense.html it's explained that:
Linux does not have a binary kernel interface, nor does it have a fixed kernel interface. Please realize that the in-kernel interfaces are not the kernel-to-userspace interfaces. The kernel-to-userspace interface is the one that application programs use, the syscall interface. That interface is _very_ stable over time, and will not break.

The author of the article says he has old programs that were built on a pre-0.9something kernel that still work just fine on the latest 2.6 kernel release. This interface is the one that users and application programmers can count on being stable.

That article reflects the view of a large portion of Linux kernel developers: the freedom to change in-kernel implementation details and APIs at any time allows them to develop much faster and better.

Without the promise of keeping in-kernel interfaces identical from release to release, there is no way for a binary kernel module like VMware's to work reliably on multiple kernels.

As an example, if some structures change in a new kernel release (for better performance or more features or whatever other reason), a binary VMware module may cause catastrophic damage using the old structure layout. Compiling the module again from source will capture the new structure layout, and thus stand a better chance of working -- though still not 100%, in case fields have been removed or renamed or given different purposes.

If a function changes its argument list, or is renamed or otherwise made no longer available, not even recompiling from the same source code will work. The module will have to adapt to the new kernel, which is workable since everybody has (or should have) the source and is able to (or can find somebody who is able to) modify it to fit. "Push work to the end-nodes" is a common idea in both networking and free software: since the resources [at the fringes]/[of the developers outside the Linux kernel] are larger than the limited resources [of the backbone]/[of the Linux developers], the trade-off to make the former do more of the work is accepted.

On the other hand, Microsoft has made the decision that they must preserve binary driver compatibility as much as possible -- they have no choice, as they are playing in a proprietary world. In a way, this makes it much easier for outside developers who no longer face a moving target, and for end-users who never have to change anything. On the downside, this forces Microsoft to maintain backwards-compatibility, which is (at best) time-consuming for Microsoft's developers and (at worst) is inefficient, causes bugs, and prevents forward progress.

ABI compatibility is a mixed bag. On one hand, it allows you to distribute binary modules and drivers which will work with newer versions of the kernel (with the long-term problems of proprietary software already described). On the other hand, it forces kernel programmers to add a lot of glue code to retain backwards compatibility. Because Linux is open source, and because kernel developers question whether distributing binary modules is even allowed, the ability to distribute binary modules isn't considered that important. On the upside, Linux kernel developers don't have to worry about ABI compatibility when altering data structures to improve the kernel. In the long run, this results in cleaner kernel code.

Edited 2011-01-15 18:52 UTC

Reply Score: 7

nt_jerkface Member since:
2009-08-26

A binary-only driver is very bad news, and should be shunned.


Bad news for who? Users? They just want something that works. The current system already provides plenty of bad news.

That proprietary software doesn't respect users' freedom

Freedom as defined by Stallman's newspeak that only exists to push his agenda.


binary-only drivers are typically buggy for lack of peer review

Everyone in this thread agrees that the proprietary nvidia drivers are the best.

On the other hand, Microsoft has made the decision that they must preserve binary driver


Why does Microsoft have to be pulled into this? Why not limit the discussion to Unix systems that have a stable ABI?

Tell me where Linux would have been held back if they had kept a stable ABI with a 3-year cycle. FreeBSD keeps a stable ABI across minor releases, so be specific and show in comparison how Linux has had an advantage.

Reply Score: 3

jabjoe Member since:
2009-05-06

Bad news for who? Users?

Yes.

They just want something that works.


That's why they don't want a stable kernel API.

You think you want a stable kernel interface, but you really do not, and you don't even know it. What you want is a stable running driver, and you get that only if your driver is in the main kernel tree.


Everyone in this thread agrees that the proprietary nvidia drivers are the best.


I certainly don't. They are the only driver on my system that crashes, regularly. They don't keep up with X developments, so you are left behind. I can't wait to not have to use them.

Why not limit the discussions to Unix systems that have a stable ABI?


Fine. Which Unix system supports the most devices and architectures? (In fact, more than any OS, ever.)

Dude, really, read the doc, it covers all that you are bringing up.

Reply Score: 1

ba1l Member since:
2007-09-08

Tell me where Linux would have been held back if they had kept a stable ABI with a 3-year cycle. FreeBSD keeps a stable ABI across minor releases, so be specific and show in comparison how Linux has had an advantage.


* LONG POST * (basically, the whole ABI-stability thing is a convenient thing to blame, but is just a distraction from the real problem).

I don't see that FreeBSD has gained any advantages by having a more stable ABI than Linux. In terms of graphics drivers, it has exactly the same problems that Linux has, for exactly the same reasons.

Those reasons are that Xorg is full of legacy crap that nobody uses anymore, which still needs to remain fully supported (and no, I'm not talking about network transparency). This makes Xorg far more difficult to maintain and improve without breaking everything, and slows development down.

Worse still - the newer stuff that people actually use / want to use doesn't work properly. Either because it's not had enough time spent on it, or because it interacts poorly with the legacy crap.

Not having a stable ABI doesn't hurt the open-source side of things. Xorg developers have no problems keeping up to date with Linux (and the FreeBSD developers have no problems keeping up with the latest DRI2 changes from Linux either). So, the only group it could possibly hurt are the closed-source guys. That'd be Nvidia and ATI, basically. Let's see what Nvidia have to say...

http://www.phoronix.com/scan.php?page=article&item=nvidia_qa_linux&...

According to the lead of Nvidia's Linux / Unix driver team:

- The drivers are focused on workstation graphics (CAD, 3D modelling and animation) first, because that's where Nvidia make their money.
- Desktop or gaming features are added if they have spare time, but are a much lower priority.
- The driver is almost entirely cross-platform, with most of it being shared between Linux, FreeBSD, Solaris, Mac OS X, and Windows. The Linux-specific kernel module is tiny.
- The lack of a stable kernel ABI is "not a large obstacle for us", and keeping the Linux-specific driver up to date "requires occasional maintenance... but generally is not too much work".

So, Nvidia don't seem to think it's a problem. I think they'd know better than you do.

As for other drivers... I don't see the problem. Nearly everything in a modern PC will run just fine with no special drivers. On Windows, you use Microsoft's drivers, on Mac OS X you use Apple's drivers (and they even work on general PC hardware with few problems), and on Linux you just use the standard kernel drivers.

The only exceptions are printers, video card drivers, and wireless network drivers.

Printer drivers are user-space (even on Windows these days), so the question of a stable kernel ABI is irrelevant. Besides, Linux and Mac OS X use the same printer driver system (CUPS, which is owned by Apple), yet only HP bother to provide Linux drivers.

As for wireless network cards... the hardware manufacturers can not be trusted to make drivers that don't suck, for any OS. The in-kernel drivers for wireless devices kick the ass of any vendor-supplied Linux driver, or of the Windows drivers running through NDISWrapper.

One other point - remember the problems Microsoft had with third-party drivers on Windows? How the number one cause of BSODs was Nvidia's video driver? How much trouble lousy third-party drivers caused?

To solve this problem, Microsoft had to develop a huge range of static test suites, and a fairly comprehensive driver testing regime. They then had to force hardware manufacturers to use these tools and certify their drivers, by adding scary warnings about unsigned drivers. Later on, they even removed support for non-certified drivers entirely.

The Linux community can not do that, for a whole heap of licensing, technical, and logistical reasons. Plus, we don't have the money, and we don't have the clout to force hardware manufacturers to follow the rules. So they won't - they just won't release Linux drivers at all.

Reply Score: 12

nt_jerkface Member since:
2009-08-26

I don't see that FreeBSD has gained any advantages by having a more stable ABI than Linux. In terms of graphics drivers, it has exactly the same problems that Linux has, for exactly the same reasons.


FreeBSD does not have even close to the same desktop market share or mindshare as Linux, and as such does not get the same amount of attention from hardware companies. The point of bringing up FreeBSD is that it has had a stable ABI across minor releases, and yet no one has told me how Linux was able to leap ahead in terms of specific features that could not wait for a minor release cycle.


Let's see what Nvidia have to say...

Your link doesn't work. Try this one:
The Challenge In Delivering Open-Source GPU Drivers
http://www.phoronix.com/scan.php?page=news_item&px=ODk3MA
For proper Sandy Bridge GPU support under Linux you are looking at the Linux 2.6.37 kernel, Mesa 7.10, and xf86-video-intel 2.14.0 as being the critical pieces of the puzzle while also an updated libdrm library to match and then optionally there is the libva library if wishing to take advantage of the VA-API video acceleration

What a mess.


So, Nvidia don't seem to think it's a problem. I think they'd know better than you do.

So you cherry-picked a few positive quotes. Would it have taken more or less labor for them to provide a binary driver for a 4-year interface than their shim / open-source shenanigans? Actions speak louder than words, and by their actions they clearly prefer to release binary drivers for stable interfaces. Users prefer binary drivers to having an update break the system. Users just want something that works.


The only exceptions are printers, video card drivers, and wireless network drivers.


Wait, so you are saying everything else works fine in Linux? What about webcams, sound cards and Bluetooth? No complaints about Audigy then?


Printer drivers are user-space (even on Windows these days), so the question of a stable kernel ABI is irrelevant.


The question is obviously related to video card drivers, and most of your long-winded post is irrelevant. I asked a simple question that you haven't been able to answer.


As for wireless network cards... the hardware manufacturers can not be trusted to make drivers that don't suck, for any OS.


Bullshit, I can list numerous network cards that have excellent customer ratings. Intel cards especially have been stellar for me.

How the number one cause of BSODs was Nvidia's video driver? How much trouble lousy third-party drivers caused?


No, I don't recall that actually. If you tally up video card driver issues then Linux definitely comes out on top. There are endless cases of video card drivers being broken in Linux. And that requires more than a restart.

Reply Score: 2

No it isnt Member since:
2005-11-14

Your link doesn't work. Try this one:
The Challenge In Delivering Open-Source GPU Drivers
http://www.phoronix.com/scan.php?page=news_item&px=ODk3MA
For proper Sandy Bridge GPU support under Linux you are looking at the Linux 2.6.37 kernel, Mesa 7.10, and xf86-video-intel 2.14.0 as being the critical pieces of the puzzle while also an updated libdrm library to match and then optionally there is the libva library if wishing to take advantage of the VA-API video acceleration

What a mess.

You don't understand what you're talking about. What the above quote says is that Ubuntu 11.04 should support Sandy Bridge out of the box.

Reply Score: 2

Nth_Man Member since:
2010-05-16

" A binary-only driver is very bad news, and should be shunned.

Bad news for who? Users?
"
Yes, for users, not for monopolists. You just have to think long-term.

Something that forces users to depend on a company whose goal is to extract the largest amount of money from them... works for the company. You know, Bill Gates got to be the richest man and Microsoft got to be a convicted monopolist (at least three times).

"That proprietary software doesn't respect users' freedom

Freedom as defined by Stallman's newspeak that only exists to push his agenda.
"
You can try to modify free software to your needs... and you can try to modify proprietary software to your needs, and see where we have our hands tied :-(

"binary-only drivers are typically buggy for lack of peer review

Everyone in this thread agrees that the proprietary nvidia drivers are the best.
"
The word "typically" doesn't mean "always", that's why people use the word "typically" instead of the word "always". Also when Nvidia stops mantaining a driver (in Windows, Linux, etc) we start seeing what happens, so we have to think long-term.

"On the other hand, Microsoft has made the decision that they must preserve binary driver

Why does Microsoft have to be pulled into this? Why not limit the discussions to Unix systems that have a stable ABI?
"
It's to show what happens with the "stable ABI" alternative.

Tell me where Linux would have been held back if they kept an stable abi with a 3 year cycle.

If there are problems in this thread with elementary facts, imagine if we start speculating.

Reply Score: 0

Nth_Man Member since:
2010-05-16

Modding down the parent comment... without giving an argument... That way reasoning is avoided?

Reply Score: 1

nt_jerkface Member since:
2009-08-26

You can try to modify free software to your needs... and you can try to modify proprietary software to your needs, and see where we have our hands tied :-(


I really like how GPL advocates proselytize the basics even on a site called OSNEWS, even to someone who clearly knows about Linux and who Stallman is. Reminds me of Mormons who knock on doors and ask if you have heard of Jesus. Jesus Christ? No, I have never heard of him. I've lived in America all my life and have not heard of the guy. Is he somehow related to that holiday... what's it called... Santaday or something?

It's to show what happens with the "choose ABI" alternative.

OSX has a stable ABI and has clearly not been as successful as Linux on the desktop.


If there are problems in this thread with elementary facts, imagine if we start speculating.

I see you can't answer the question either.

Perhaps I should write a formal proposal and see if the Linux devs can answer it. Stable_api_nonsense was written years ago, so where are the benefits? Which specific feature could not have waited for a 3-year stable ABI cycle?

Reply Score: 2

smitty Member since:
2005-10-13

OSX has a stable ABI and has clearly not been as successful as Linux on the desktop.

OSX also has a billion-dollar marketing campaign, and is limited to running only on specially chosen hardware, because they either don't want to or can't support as much hardware as Linux can. Poor choice of example, there.

Some of the BSDs have stable APIs, and that hasn't seemed to help them be successful. I'm sure your argument would be that they don't have enough market share for that to make a difference. And you're right - where you're wrong is thinking that Linux would be any different. Linux doesn't have enough marketshare for hardware companies to be very interested in it either, and for those that are the changing ABI is a relatively small inconvenience.

Perhaps I should write a formal proposal and see if the Linux devs can answer it. Stable_api_nonsense was written years ago, so where are the benefits? Which specific feature could not have waited for a 3-year stable ABI cycle?

If having a stable API was that important, the distros would just freeze on a particular kernel/X/etc. version for 3 years while all the devs kept working on newer code that could change. In fact, that's exactly how corporate support is handled. So, why doesn't everything work that way?

It's not difficult to figure out - general Linux users are more interested in getting the new features that the changing API provides as soon as possible, and are willing to give up the stable API which could get them more binary drivers on old distros. Because this is OSS, there is no way to control what users pick - you can't simply dictate that people use the old distros, because they are free to grab whatever they want, and they've chosen otherwise.

Edited 2011-01-16 21:41 UTC

Reply Score: 3

nt_jerkface Member since:
2009-08-26

OSX also has a billion-dollar marketing campaign, and is limited to running only on specially chosen hardware, because they either don't want to or can't support as much hardware as Linux can.

Poor choice of example, there.


Right, but FreeBSD has a smaller budget than Linux and yet can maintain a stable ABI across minor releases.

Some of the BSDs have stable APIs, and that hasn't seemed to help them be successful.


Linux drew popularity by being successful on the server, where the unstable ABI is less of an issue. FreeBSD has numerous advantages, but Linux has the inertia.


Linux doesn't have enough marketshare for hardware companies to be very interested in it either,


I already posted a Phoronix article about the troubles Intel has gone through. A stable ABI would mean less work for video card companies, end of story.

If having a stable API was that important, the distros would just freeze on a particular kernel/X/etc. version for 3 years while all the devs kept working on newer code that could change.


I've already gone over this. If a distro freezes the kernel then they run into a host of compatibility issues. For a desktop distro it is more trouble than it is worth. Then on top of it you have the split problem: a distro that maintains a stable binary interface for video drivers won't matter much to GPU companies, since most distros would still have the standard kernel.

Linus has designed Linux in a way that discourages forking and binary drivers. He doesn't care if Linux is a success on the desktop or even the server. It's a hobby kernel to him, and the Linux desktop legions need to learn this and accept that at its core Linux is not designed to compete with Windows or OSX. Distros like Ubuntu aim for the desktop but have to continually deal with disruptive changes made upstream. It's a big mess, but Linus prefers it that way. He is on record as stating that Linux is software evolution, not software engineering. If kernel changes break working hardware downstream, that is all part of evolution. If Linux only gains success as a server and embedded OS then that is fine with him.

Reply Score: 2

Nth_Man Member since:
2010-05-16

"You can try to modify free software to your needs... and you can try to modify proprietary software to your needs, and see where we have our hands tied :-(


I really like how GPL advocates proselytize the basics [...]
"
Nt_jerkface talked about freedom that "only exists to [...]", and it was answered by pointing out where we are all free to do something and where we are not.

Reply Score: 1

RE[2]: Reverse engineering sucks
by gilboa on Sat 15th Jan 2011 20:34 UTC in reply to "RE: Reverse engineering sucks"
gilboa Member since:
2005-07-06

You would have better GPU drivers if Linus provided a stable ABI. ....
Because having a stable ABI in a *nix is just unthinkable... like OS X, Solaris, oh wait, never mind. Where are all the benefits of the unstable ABI? How has Linux leaped past other *nix systems?


*Cough Bullshit *Cough.
As someone who actually maintains a fairly large out-of-tree kernel project (with >200K LOC), I find your comment to be misguided, at best.
Less than 1% of my team's time is spent on making the code compatible with upstream kernel.org releases, and I'm using far more APIs than your average graphics card driver (sockets, files, module management, etc.).

- Gilboa

Edited 2011-01-15 20:35 UTC

Reply Score: 8

nt_jerkface Member since:
2009-08-26

You can try answering the same question.

How would Linux have been held back if they had kept the ABI on a three-year cycle?

Reply Score: 3

RE[4]: Reverse engineering sucks
by gilboa on Sun 16th Jan 2011 06:34 UTC in reply to "RE[3]: Reverse engineering sucks"
gilboa Member since:
2005-07-06

You can try answering the same question.

How would Linux have been held back if they had kept the ABI on a three-year cycle?


No idea.
From my -own- experience, I can't say that maintaining Windows kernel code with its own semi-stable ABI is any easier compared to Linux. (Actually, the availability of the complete kernel source makes Linux far easier - at least in my view.)

Getting back to the subject, you claimed that the lack of stable ABI is the main problem with writing good drivers. I claimed, from my own personal experience (that may or may not be relevant in the case of graphics card writers), that this is -not- the case.
Now, unless you have some actual experience and/or evidence to prove your point, your initial argument is pure speculation.

- Gilboa

Reply Score: 2

nt_jerkface Member since:
2009-08-26


I can't say that maintaining Windows kernel code with its own semi-stable ABI is any easier compared to Linux.


Define semi-stable.

Write a binary driver for Windows and it will work for the life of the system. Write one for Linux and it will likely be broken with the next kernel update.

Getting back to the subject, you claimed that the lack of stable ABI is the main problem with writing good drivers


No I didn't claim that. I claimed that Linux drivers would be better if it had a stable ABI. There is a difference. Hardware companies would produce higher-quality drivers, and in a more timely manner, if there were a stable ABI. This is partly due to IP issues and companies wanting to get drivers out on release day.

The Challenge In Delivering Open-Source GPU Drivers
http://www.phoronix.com/scan.php?page=news_item&px=ODk3MA

Microsoft should give Linus millions in stock for being so stubborn with binary drivers. It's a needless restriction that has held back the Linux desktop, especially during the XP days. That single decision has helped Windows keep its dominant position.

Reply Score: 2

RE[6]: Reverse engineering sucks
by gilboa on Mon 17th Jan 2011 03:42 UTC in reply to "RE[5]: Reverse engineering sucks"
gilboa Member since:
2005-07-06

Define semi-stable.

Write a binary driver for Windows and it will work for the life of the system.


In general it's true, but I had drivers getting broken by SP releases and between different classes of Windows. (E.g. XP vs 2K3).

Write one for Linux and it will likely be broken with the next kernel update.


Again, at least from my own experience, this is complete (!!!) bullshit.
ABI changes in the kernel are -few- and -far between-.
In the same fairly large kernel project mentioned above, we have 35 (!!!) LINUX_VERSION_CODE guards required to support Linux 2.6.9 -> 2.6.35.
This means that in order to support all the kernels used from RHEL 4.0 through RHEL 6.0 and Fedora 14 (~6 years), we only had to make 35 adjustments, or fewer than 6 changes a year.
... At an average of 10-60 minutes per change (and I'm exaggerating), we spent on average ~3 (!!!!) hours a year on keeping our project current.
Color me unimpressed.
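
For the curious, a typical adjustment is just a version guard around a changed API. A minimal sketch, using the well-known 2.6.19 change that dropped the pt_regs argument from interrupt handlers (my_isr is a made-up name, not from our project):

    #include <linux/version.h>
    #include <linux/interrupt.h>

    /* One version guard: kernels before 2.6.19 passed a third
     * pt_regs argument to interrupt handlers; later ones don't. */
    #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 19)
    static irqreturn_t my_isr(int irq, void *dev_id, struct pt_regs *regs)
    #else
    static irqreturn_t my_isr(int irq, void *dev_id)
    #endif
    {
            /* acknowledge the hardware, defer the real work, etc. */
            return IRQ_HANDLED;
    }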

No I didn't claim that. I claimed that Linux drivers would be better if it had a stable ABI. There is a difference. Hardware companies would produce higher-quality drivers, and in a more timely manner, if there were a stable ABI. This is partly due to IP issues and companies wanting to get drivers out on release day.


I beg to differ.
See the comment above.

The Challenge In Delivering Open-Source GPU Drivers
http://www.phoronix.com/scan.php?page=news_item&px=ODk3MA


As much as I enjoy reading Phoronix, in this particular case I wasn't too impressed.
Plus, see my comment below.

Microsoft should give Linus...


You're completely mixing up the Linux stable ABI (as in the Linux kernel's stable ABI) and the Xorg and Mesa ABIs.
Two completely different things.
(Plus, I have zero experience with the latter, so I can't really comment on that...)

millions in stock for being so stubborn with binary drivers. It's a needless restriction that has held back the Linux desktop, especially during the XP days. That single decision has helped Windows keep its dominant position.


Wow, you're mixing so many different things I don't know where to start...
Binary drivers, a stable ABI in the kernel, stable ABIs in Mesa and Xorg... you're really making a salad here.

I'll start by pointing out that nVidia (undoubtedly the maker of the best binary driver on Linux) is not really concerned by the lack of a so-called stable ABI [1].
I'll continue by pointing out that other OSes which do have a stable ABI (Solaris?) haven't fared better than Linux; quite the contrary.

In short, thus far you haven't really provided any proof for your POV - not from personal experience and not from actual binary driver developers (see below).
Maybe it's time to reconsider?

- Gilboa
[1] http://www.phoronix.com/scan.php?page=article&item=nvidia_qa_linux&...
1) The lack of a stable API in the Linux kernel. This is not a large obstacle for us, though: the kernel interface layer of the NVIDIA kernel module is distributed as source code, and compiled at install time for the version and configuration of the kernel in use. This requires occasional maintenance to update for new kernel interface changes, but generally is not too much work.

Reply Score: 3

nt_jerkface Member since:
2009-08-26

In general it's true, but I had drivers getting broken by SP releases and between different classes of Windows. (E.g. XP vs 2K3).


Those are two different operating systems. XP users were not expected to upgrade to 2K3, while Linux users are expected to upgrade every 6 months.

This means that in order to support all the kernels used from RHEL 4.0 through RHEL 6.0 and Fedora 14 (~6 years), we only had to make 35 adjustments, or fewer than 6 changes a year.


WorksForMe(tm).

What do you have to say to the millions of VMware Server users who had their software broken numerous times by kernel changes? Tough shit?

You're completely mixing up the Linux stable ABI (as in the Linux kernel's stable ABI) and the Xorg and Mesa ABIs.


No I'm not: a stable ABI for video cards would reduce the total amount of work required of GPU companies, and that work extends into Xorg, as that article shows.

I'll start by pointing out that nVidia (undoubtedly the maker of the best binary driver on Linux) is not really concerned by the lack of a so-called stable ABI [1].


Cherry-picking positive PR comments. Do you expect a major company like NVIDIA to come out and say that Linus is a stubborn asshole? Would a stable three-year ABI be more or less work for NVIDIA and other hardware companies? Just answer that question. Oh, and please don't claim that opening their specs would be the easiest route. AMD has already done this, and now we have heard that there is a lack of open source driver developers.

I find it hilarious that the Linux defenders are so adamant about this issue. How dare I question the resounding success of Linux on the desktop. Linus and Greg KH have already stated that the kernel is a minefield for companies that want to release binary drivers. The Year of the Linux Desktop would have happened years ago if the guy at the top were interested in meeting the needs of third parties like Nvidia that can help alternative systems succeed.

Oh, and you still haven't answered my question, along with everyone else here. Show me what couldn't have waited 3 years.

Edited 2011-01-17 20:10 UTC

Reply Score: 2

RE[6]: Reverse engineering sucks
by Nth_Man on Mon 17th Jan 2011 08:39 UTC in reply to "RE[5]: Reverse engineering sucks"
Nth_Man Member since:
2010-05-16

Write one for Linux and it will likely be broken with the next kernel update.

That is not true, of course.

http://lxr.linux.no/#linux+v2.6.37/Documentation/stable_api_nonsens...
http://www.osnews.com/thread?458215

Microsoft should give Linus millions in stock

No comment :-)

Reply Score: 1

PlatformAgnostic Member since:
2006-01-02

Graphics is a whole other kettle of fish. As far as I know, writing a graphics driver involves writing multiple high-quality JIT compilers, a memory management layer, and a bunch of difficult-to-debug libraries. Plus you need a minimal OS on the ASIC side. The statistic I heard (and believe) is that the NVidia driver on Windows contains more code than all the other drivers on a typical system combined.

Reply Score: 2

RE[5]: Reverse engineering sucks
by gilboa on Mon 17th Jan 2011 03:49 UTC in reply to "RE[4]: Reverse engineering sucks"
gilboa Member since:
2005-07-06

As you pointed out, unlike, say, kernel-based deep packet inspection software (ummm....) that's forced to use 70 different kernel APIs (from memory management to files, sockets, module management, and an assortment of contexts and memory spaces), a video driver such as the nVidia driver is fairly light on kernel APIs, making it far less susceptible to kernel changes.
Most of the code (JIT, HW register management, etc.) can easily be shared between Windows and Linux.

To quote nVidia [1], ~90% of their code is shared between Windows and Linux.

I'd estimate greater than 90% of the Linux driver is cross-platform code. The NVIDIA GPU software development team has made a very conscious effort to architect our driver code base to be cross-platform (for the relevant components). We try to abstract anything that needs to be operating system specific into thin interface layers.


- Gilboa
[1] http://www.phoronix.com/scan.php?page=article&item=nvidia_qa_linux&...

Edited 2011-01-17 03:50 UTC

Reply Score: 2

jacquouille
Member since:
2006-01-02

Another way that the title of this story is inaccurate is that we do have hardware acceleration on Linux thanks to XRender --- and we have had it for years.

So if your drivers have a good XRender implementation then your Firefox can blow the competition into orbit in 2D graphics benchmarks such as:
http://ie.microsoft.com/testdrive/Performance/PsychedelicBrowsing/D...

What's blacklisted on buggy X drivers is OpenGL. It is used for WebGL, and for accelerated compositing of the graphics layers in web pages.

However, for the latter (compositing), we are still working on resolving performance issues in the interaction with XRender, and that won't make it into Firefox 4, so we don't enable accelerated compositing by default (regardless of the driver blacklist). So if you want accelerated compositing (at the risk of losing the benefit of XRender), you have to go to about:config and set

layers.acceleration.force-enabled

to true. I'm happily using it here, and it can double the performance in fullscreen WebGL demos.
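
If you prefer a file-based way to do it, the same pref can be set from a user.js in your profile directory (this should be equivalent to flipping it in about:config; delete the line to go back to the default):

    // user.js in the Firefox profile directory
    user_pref("layers.acceleration.force-enabled", true);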

Reply Score: 4

No it isnt Member since:
2005-11-14

Interesting test. With Chrome/Chromium (same version), it's faster in Linux with both fglrx and Gallium3d than in Windows7. Not much between fglrx and Gallium3d. With Firefox 4 Beta9, fglrx is slower than Chrome (but faster than Firefox 3.6 Linux and Windows), whereas both Windows and Linux w/Gallium3d are very fast indeed. Which is to say that the Gallium3d Radeon 5xx0 driver finally does something very well.

Reply Score: 2

so.....
by pabloski on Sat 15th Jan 2011 18:25 UTC
pabloski
Member since:
2009-09-28

...after all we really need gallium and wayland on linux ;)

Reply Score: 1

RE: so.....
by jabjoe on Sat 15th Jan 2011 19:02 UTC in reply to "so....."
jabjoe Member since:
2009-05-06

Gallium helps X too. It means X has a single driver for Gallium3D/KMS/DRM that works on multiple cards. Removing drivers from X will greatly help X, as it will mean much less code and make changing things much easier. It doesn't just make X alternatives possible. Everyone is a winner.

Reply Score: 3

Mozilla may end up with egg on their face
by jabjoe on Sat 15th Jan 2011 19:05 UTC
jabjoe
Member since:
2009-05-06

It's open source; someone else can do it if they can't. There is a graphics drivers problem, but it's getting much better and the future is bright (Gallium3D and friends). Even with what we have now, many, many applications manage to do OpenGL just fine on X (even with the crappy closed NVidia drivers I must run, which crash X about once a month). They will look like fools if someone else does a fork of Firefox with working OpenGL. My guess is that this is what will happen, because they are effectively throwing down the gauntlet. If it does happen, working OpenGL will be the sole purpose of the fork, and Mozilla will probably quietly take the code, grumbling under their breath. ;-)

Reply Score: 2

tuma324 Member since:
2010-04-09

It's open source; someone else can do it if they can't. There is a graphics drivers problem, but it's getting much better and the future is bright (Gallium3D and friends). Even with what we have now, many, many applications manage to do OpenGL just fine on X (even with the crappy closed NVidia drivers I must run, which crash X about once a month). They will look like fools if someone else does a fork of Firefox with working OpenGL. My guess is that this is what will happen, because they are effectively throwing down the gauntlet. If it does happen, working OpenGL will be the sole purpose of the fork, and Mozilla will probably quietly take the code, grumbling under their breath. ;-)


pffft... the fact that they can't do this as easily as they can on other platforms (Windows, Mac) already tells a lot about Linux and X.

Reply Score: 1

jabjoe Member since:
2009-05-06

Others don't make such a fuss and manage. My old OpenGL stuff just works.

Reply Score: 2

tuma324 Member since:
2010-04-09

Others don't make such a fuss and manage. My old OpenGL stuff just works.


That's true too.

Reply Score: 1

Neolander Member since:
2010-03-08

It's open source; someone else can do it if they can't. There is a graphics drivers problem, but it's getting much better and the future is bright (Gallium3D and friends). Even with what we have now, many, many applications manage to do OpenGL just fine on X (even with the crappy closed NVidia drivers I must run, which crash X about once a month). They will look like fools if someone else does a fork of Firefox with working OpenGL. My guess is that this is what will happen, because they are effectively throwing down the gauntlet. If it does happen, working OpenGL will be the sole purpose of the fork, and Mozilla will probably quietly take the code, grumbling under their breath. ;-)

Well, I tried the WebGL test suite linked earlier, bypassing the Intel driver blacklisting.

It crashed Firefox.

If some simple tests supplied by WebGL's vendor can already lead to this result, I agree that WebGL should not be enabled by default for this chipset. As jacquouille said, it's too much of a security risk.

Reply Score: 1

jabjoe Member since:
2009-05-06

I don't doubt there is a problem with it; what I'm saying is that others manage. Worst case, show some message blaming the graphics card drivers. ;-)

Reply Score: 2

Neolander Member since:
2010-03-08

So they would have to put workarounds for every bug they find in a platform-specific graphics driver right in a multiplatform web browser?

Sounds out of place to me. Well, sure, it is doable, but I understand their decision not to do it.

Reply Score: 1

jabjoe Member since:
2009-05-06

No, do like they do with plugins: a separate process. Let it crash, and if/when it crashes, say it's probably the graphics driver's fault, >insert card name here<. With the open drivers, someone will try and fix it; with the closed ones, well, let's hope they care enough about last year's device.

Or, you know, look at the source of things that manage just fine....
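
To show what I mean by the separate-process idea, here's a minimal sketch (run_gl_work() is hypothetical and stands in for whatever actually touches the driver; a real browser would use proper IPC, this only shows the crash isolation):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Hypothetical stand-in for GL calls that may crash in a buggy driver. */
    static void run_gl_work(void) { /* ... */ }

    int main(void)
    {
            pid_t pid = fork();
            if (pid == 0) {                  /* child does the risky work */
                    run_gl_work();
                    _exit(EXIT_SUCCESS);
            }
            int status;
            waitpid(pid, &status, 0);
            if (WIFSIGNALED(status))         /* child crashed, parent lives */
                    fprintf(stderr, "GL helper died (signal %d) - probably "
                            "the graphics driver's fault\n", WTERMSIG(status));
            return 0;
    }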

Reply Score: 2

Neolander Member since:
2010-03-08

But things which manage just fine only use a subset of the OpenGL API. As jacquouille said, the goal of WebGL is to put 90% of said API in the hands of scripts, without knowing which parts of it said scripts will use...

Unless you advocate supporting only a subset of WebGL (the part which doesn't crash on the currently used drivers). Then we simply don't agree. We've had too much partial web standard support in the past, I think.

Edited 2011-01-16 10:04 UTC

Reply Score: 1

jabjoe Member since:
2009-05-06

Wine is one of the things that manage, and for OpenGL it will probably do very little bar passing it on. But the DX implementation is more complex than the OpenGL one. Crashes in Wine are normally because of the nature of it, i.e. reimplementing a bag of closed APIs to run closed programs that use those APIs, not due to graphics drivers. That's what I think anyway; I don't know of any real data on this.

Reply Score: 2

Misleading (troll-y) article?
by fithisux on Sat 15th Jan 2011 19:10 UTC
fithisux
Member since:
2006-01-22

Xorg drivers are buggy. Yes... sure. What's really buggy is the crap called GFX hardware, which is

1. Not standardized
2. Under documented

If GFX hardware had a standard for access, more people could improve the OpenGL stack (coincidentally this happens with Microsoft, though vendors write blobs to conform to its interfaces). We live in 2010, and graphics cards, after all the technological advancements, still cannot export a common hardware access API. It makes me wonder about the author's unfair and inaccurate description of the GFX situation in general. Why don't we have HW-accelerated browsers on Haiku or Syllable? Because people would be involved in an eternal hunt for documentation. The only answer is standards. There are enough FPGAs out there to burn a standard driver into them. If you want my 2 cents:

1. GFX should handle the interface to the monitor and do elementary 2D acceleration in a standardized way (have you heard of VESA?)
2. 3D/GP computing should be refactored onto another chipset ("APU" by AMD is good terminology) that could be put on a PCIe card to provide standardized access. Put in an FPGA to do the translation from standardized calls to vendor HW.

For example, I could buy a cheap standard 2D GFX card plus a standardized accelerator board that is cheaper because it is more oriented towards GP computing and weaker in 3D (I want to solve differential equations with Octave on FreeBSD, for example). If you want to go cheaper, buy only the first and let your 8-core CPU do the rest.

So we could have two markets: cheap standardized 2D cards (like OHCI, OHCI1394 and PCI-ATA cards) and accelerator/co-processor cards that should also be standardized. Less unemployment.

What do we have now? Everything combined in a proprietary, non-standards-compliant, uncompetitive manner, and older vendors killed off. OSS is part of the global market, and making drivers for specific OSes is uncompetitive. Even the mighty Windows needs a vendor driver.

There is always the cheapness factor. But would you sacrifice freedom and standards compliance for price? If yes, then, in my opinion, computing is not for you.

Reply Score: 6

RE: Misleading (troll-y) article?
by siride on Sun 16th Jan 2011 04:21 UTC in reply to "Misleading (troll-y) article?"
siride Member since:
2006-01-02

This sounds like a recipe to make everything slow and lowest common denominator.

Reply Score: 2

fithisux Member since:
2006-01-22

This sounds like a recipe to make everything slow and lowest common denominator.


This sounds like FUD.

Reply Score: 2

siride Member since:
2006-01-02

Fear? No

Uncertainty? No

Doubt? well, maybe

The field of graphics cards is advancing at a rapid pace. Bridling it with some committee-derived standard would be extremely hurtful to the companies involved, and mostly unnecessary anyway. They already provide drivers for the platforms that matter, and since they control both the card and the driver, they can develop at a much faster pace.

By the way, there already is a standard interface and it's called OpenGL. DirectX would count too. Adding yet another layer is just bloat and unnecessary.

Reply Score: 2

fithisux Member since:
2006-01-22

By the way, there already is a standard interface and it's called OpenGL. DirectX would count too. Adding yet another layer is just bloat and unnecessary.


Is IEEE1394 slow and lowest common denominator?

I suggest DirectX or OpenGL be burnt onto the GFX card.

Reply Score: 2

vodoomoth Member since:
2010-03-30

Err, no. Supporting a standard API doesn't magically make your hardware less capable. The standard is not a feature superset; what it requires is a subset of the hardware's features. A good standard should provide an extension point for specific/proprietary features and a means of probing capabilities.
I'm not a gamer, but I guess there are several cards from distinct manufacturers that support DirectX 10. Are all those cards incapable of doing anything that isn't in the DX10 API? I doubt it.
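
OpenGL itself already offers this kind of probing. A minimal sketch (assumes a current GL context; note that strstr can false-positive on extension names that are prefixes of longer ones, so exact-token matching is more robust):

    #include <string.h>
    #include <GL/gl.h>

    /* Ask the driver, at runtime, whether an optional capability exists. */
    static int has_extension(const char *name)
    {
            const char *exts = (const char *)glGetString(GL_EXTENSIONS);
            return exts != NULL && strstr(exts, name) != NULL;
    }

    /* usage: if (has_extension("GL_ARB_framebuffer_object")) { ... } */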

Reply Score: 2

Not working under Windows either
by WereCatf on Sat 15th Jan 2011 21:13 UTC
WereCatf
Member since:
2006-02-15

I just tried beta 9 and, well, either I get a completely black window or it flickers like Speedy Gonzales having an epileptic seizure. Not really what I would call usable :/

Reply Score: 3

Another shot at X's ass
by Jason Bourne on Sat 15th Jan 2011 23:15 UTC
RE: Another shot at X's ass
by jacquouille on Sat 15th Jan 2011 23:24 UTC in reply to "Another shot at X's ass"
jacquouille Member since:
2006-01-02

Actually, Xorg developers have spontaneously contacted us and are looking into the driver issues we're having (which they could reproduce). Looking forward to un-blacklisting stuff in the future.

Reply Score: 3

RE[2]: Another shot at X's ass
by Thom_Holwerda on Sat 15th Jan 2011 23:32 UTC in reply to "RE: Another shot at X's ass"
Thom_Holwerda Member since:
2005-06-29

...because of this article? Or did they contact you because of something else? I can't really see OSNews having this kind of influence :/.

In any case, that sounds like good news!

Reply Score: 2

RE[3]: Another shot at X's ass
by jacquouille on Sun 16th Jan 2011 00:24 UTC in reply to "RE[2]: Another shot at X's ass"
jacquouille Member since:
2006-01-02

I don't know if it's because of this article, or because of the article that Phoronix is currently running on the same topic, or because of the various blog posts flying around the interwebs :-)

And yes, it's good news :-)

Edited 2011-01-16 00:26 UTC

Reply Score: 2

Comment by kaiwai
by kaiwai on Sun 16th Jan 2011 02:20 UTC
kaiwai
Member since:
2005-07-06

"Sadly enough, GL drivers on Windows aren't that great either," he notes, "This is why WebGL is done via Direct3D on Windows now... But that mostly a matter of performance issues."


Colour me confused, but why is it a bad thing that WebGL is implemented on top of Direct3D instead of OpenGL? If the outcome is consistent with WebGL implemented using OpenGL, then why is it even a problem? I mean, if the outcome is the same, then why is it a 'sad situation'?

Reply Score: 2

RE: Comment by kaiwai
by tyrione on Sun 16th Jan 2011 02:50 UTC in reply to "Comment by kaiwai"
tyrione Member since:
2005-11-21

""Sadly enough, GL drivers on Windows aren't that great either," he notes, "This is why WebGL is done via Direct3D on Windows now... But that mostly a matter of performance issues."


Colour me confused, but why is it a bad thing that WebGL is implemented on top of Direct3D instead of OpenGL? If the outcome is consistent with WebGL implemented using OpenGL, then why is it even a problem? I mean, if the outcome is the same, then why is it a 'sad situation'?
"

WebGL is based on OpenGL ES 2.0. All smartphones outside of the Windows world are OpenGL ES 2.0 compliant.

It should be obvious you want WebGL as a layer abstracted from OpenGL ES 2.0.

Reply Score: 2

RE[2]: Comment by kaiwai
by kaiwai on Sun 16th Jan 2011 02:56 UTC in reply to "RE: Comment by kaiwai"
kaiwai Member since:
2005-07-06

WebGL is based on OpenGL ES 2.0. All smartphones outside of the Windows world are OpenGL ES 2.0 compliant.

It should be obvious you want WebGL as a layer abstracted from OpenGL ES 2.0.


That makes absolutely no sense whatsoever. The issue is layering WebGL on top of Direct3D, and the programmer who is programming for WebGL doesn't care what happens under the hood and behind the scenes, because all he is concerned about is the fact that WebGL is provided. If the WebGL-on-Direct3D implementation covers the whole WebGL stack, a programmer can program against WebGL and it runs on Windows, Mac OS X and Linux regardless of what the back end is; then the whole commotion is for nothing other than the sake of drama.

I think the whole sadness has to do with the fact that they have to maintain two separate back ends instead of a single one. Sorry to sound pathetic, but boo-f--king-whoo. It's time the Firefox developers stopped writing their code for the lowest common denominator and started taking advantage of the features which operating systems expose to developers. Apparently they're OK with using Direct3D/Direct2D/DirectWrite, but maintaining an extra backend for WebGL is 'one step too far'? Good lord. There is a reason I refuse to use Firefox on Mac OS X.

Edited 2011-01-16 02:58 UTC

Reply Score: 3

RE[3]: Comment by kaiwai
by Neolander on Sun 16th Jan 2011 12:14 UTC in reply to "RE[2]: Comment by kaiwai"
Neolander Member since:
2010-03-08

Performance is an issue, too. Having WebGL code translated to Direct3D on Windows is akin to DirectX-based Windows programs running on top of Wine, which get all their Direct3D calls translated to OpenGL.

Sure, those programs don't know about it, but the call translation overhead results in very poor performance in the end. And THAT they care about.

Edited 2011-01-16 12:15 UTC

Reply Score: 2

RE[4]: Comment by kaiwai
by moondevil on Mon 17th Jan 2011 08:08 UTC in reply to "RE[3]: Comment by kaiwai"
moondevil Member since:
2005-07-08

The problem is that some OpenGL drivers are so bad that doing WebGL on top of DirectX is still more stable and faster than using OpenGL.

Not everyone has an ATI or NVidia graphics card, and even those have issues with OpenGL.

Reply Score: 2

RE[3]: Comment by kaiwai
by tyrione on Tue 18th Jan 2011 19:01 UTC in reply to "RE[2]: Comment by kaiwai"
tyrione Member since:
2005-11-21

Care to respond to the obvious point the other poster made about translating between graphics APIs and the poor performance of that translation?

You really needed that explained to you?

Reply Score: 2

What?? WebGL through Direct3D?
by shmerl on Sun 16th Jan 2011 15:56 UTC
shmerl
Member since:
2010-06-08

ATI is partially to blame for bad OpenGL drivers on Windows (since ATI cards are quite widespread). They never invested the same effort in them as in their DirectX drivers. Nvidia, on the other hand, produces decent OpenGL drivers across all platforms.

Still, this whole situation is a mess.

Reply Score: 2

Heh
by Wodenhelm on Sun 16th Jan 2011 23:15 UTC
Wodenhelm
Member since:
2010-07-16

So where are all the X-lovers now? Wayland brings forth his hammer.

Reply Score: 2

RE: Heh
by Mellin on Mon 17th Jan 2011 21:12 UTC in reply to "Heh"
Mellin Member since:
2005-07-06

Wayland isn't ready for use by normal people.


Wayland isn't supported by NVIDIA and ATI.

Reply Score: 2

RE: Heh
by shmerl on Wed 19th Jan 2011 02:48 UTC in reply to "Heh"
shmerl Member since:
2010-06-08

Wayland is no better if underlying OpenGL is all screwed up.

Reply Score: 1

slashdev
Member since:
2006-05-14

I was reading that the DirectX 10/11 API was successfully ported to Linux. It was not 100% done at the time I read it, but it was already running Direct3D demos and such.

Ah yes, here:
http://www.phoronix.com/scan.php?page=article&item=mesa_gallium3d_d...


If it becomes a little more mature, we could see DirectX becoming an alternative to OpenGL on non-Windows systems... (kinda shows the sad state of OpenGL right there...)

Reply Score: 2

lemur2 Member since:
2007-02-17

If it becomes a little more mature, we could see Directx becoming an alternative to OpenGL on non-windows systems...(kinda shows the sad state of opengl right there...)


OpenGL support for Firefox will likely arrive well before it can be done by the DirectX state trackers.

http://www.phoronix.com/scan.php?page=news_item&px=OTAyMA

Reply Score: 2

Switching
by Mellin on Mon 17th Jan 2011 21:09 UTC
Mellin
Member since:
2005-07-06

i'm going to switch to chromium

Reply Score: 2

Maybe ready for Firefox 4.1
by lemur2 on Mon 17th Jan 2011 23:37 UTC
lemur2
Member since:
2007-02-17

http://www.phoronix.com/scan.php?page=news_item&px=OTAyMA

Ideally these open-source developers will be able to get the WebGL issues on Mesa straightened out quickly. However, it already would be too late to get them fixed and then white-listed for Firefox 4.0. Mesa 7.10.1 / Mesa 7.11 will likely not be out for a couple of months and if these next releases do carry the WebGL fixes, for most users it's then a matter of waiting for the distribution vendors to pick-up the new packages. Maybe in time for Mozilla Firefox 4.1 these Linux GPU acceleration issues will be sorted out.


Finally, some movement towards OpenGL 3.0 support in Mesa:
http://www.phoronix.com/scan.php?page=news_item&px=OTAyMQ

Reply Score: 2

Well, well, well...
by 1c3d0g on Tue 18th Jan 2011 17:02 UTC
1c3d0g
Member since:
2005-07-06

...*clears throat*... ahem, so isn't this going to push Wayland developers to move even faster so Linux can finally have a proper graphics server? About damn time they retired that stupid kludge of software called X.

Reply Score: 2