Linked by Thom Holwerda on Sun 10th Oct 2010 14:17 UTC, submitted by Extend
Ubuntu, Kubuntu, Xubuntu
Yes, yes, it's that time of the year again: a new Fiona Apple album confirmed (which makes anything that happens between now and spring 2011 irrelevant and annoying), MorphOS 2.6 released (it will be the next news item), and, of course, a new Ubuntu release showcasing the best the Free software world has to offer on the desktop.
Thread beginning with comment 444797
RE[7]: Solid release
by Neolander on Mon 11th Oct 2010 17:52 UTC in reply to "RE[6]: Solid release"

"It makes sense for a lot of effects to be handled by the GPU. If the GPU already knows how to process effects like fading, shadows, and transparency, then the most efficient code passes them off to it. You do not want to needlessly waste CPU cycles doing transparency calculations."

What exactly are you calling "efficient"? On battery-powered machines (and even on the desktop, given today's environmental concerns), low power consumption is a very desirable form of efficiency.

A CPU's power consumption is relatively small, and every piece of software needs it anyway. A GPU under load, on the other hand, easily consumes three times as much as the CPU. Is it really efficient to roughly quadruple the machine's power draw (the CPU's share plus three times as much again for the GPU) for the sake of drawing unreadable translucent windows?

Now, let's examine the GPU use cases you mention.
- Small fading effects are not very intensive and only occur once in a while, so the CPU can handle them with no major performance hit for applications. I disagree that a GPU is needed there.
- Translucent things, on the other hand, eat up power ALL THE TIME. Every time you open a menu or a window, the whole layer stack has to be redrawn when translucency is on (see the sketch below). So in that case you're right, a GPU is welcome, even though still not strictly needed. Again, look at E17's shadows, and keep in mind that applications remain perfectly responsive on top of them, more so than on GNOME+Compiz in fact.
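To make that concrete, here is a toy back-to-front blend over a stack of layers (my own illustration in plain C, not code from any real compositor; the layer values are made up): with an opaque surface on top you could stop at the topmost layer, but as soon as the top layer is translucent you have to read and blend everything underneath it, every time anything changes.

```c
/* Toy sketch: Porter-Duff "over" compositing of a layer stack.
 * Colors are premultiplied-alpha floats for clarity; a real
 * compositor does this over whole buffers, not a single pixel. */
#include <stdio.h>

struct layer { float r, g, b, a; };   /* premultiplied color + alpha */

/* "over" operator: result = top + (1 - top.a) * bottom */
static struct layer over(struct layer top, struct layer bottom)
{
    struct layer out;
    out.r = top.r + (1.0f - top.a) * bottom.r;
    out.g = top.g + (1.0f - top.a) * bottom.g;
    out.b = top.b + (1.0f - top.a) * bottom.b;
    out.a = top.a + (1.0f - top.a) * bottom.a;
    return out;
}

int main(void)
{
    /* bottom-to-top: wallpaper, application window, translucent menu */
    struct layer stack[] = {
        { 0.2f, 0.4f, 0.8f, 1.0f },   /* opaque wallpaper        */
        { 0.9f, 0.9f, 0.9f, 1.0f },   /* opaque application      */
        { 0.0f, 0.0f, 0.0f, 0.3f },   /* 30% black menu on top   */
    };
    int n = sizeof stack / sizeof stack[0];

    /* Because the top layer is translucent, every layer below it
     * has to be blended in; with an opaque top we could stop early. */
    struct layer result = stack[0];
    for (int i = 1; i < n; i++)
        result = over(stack[i], result);

    printf("final pixel: %.2f %.2f %.2f\n", result.r, result.g, result.b);
    return 0;
}
```

The per-pixel blend itself is cheap; the cost comes from having to repeat it across the whole affected area every time something under a translucent surface changes.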

But now, let's consider what that translucency is actually used for:
- Windows 7's unreadable window borders and taskbar (your mileage may vary depending on your wallpaper's colors, but in my case the result was so awful I had to disable it and mentally thank the Microsoft engineers for providing the option, although they could just as well have made the Basic theme less crappy).
- OS X's unreadable menu bar and distracting menus.
- Shadows that no one except geeks will ever notice.

See where I'm going? When transparency effects are used, they either hurt usability or go unnoticed, except for the much reduced battery life they lead to. I've yet to see a case where transparency is used wisely in a GUI.

Now, please note that I'm not one of those spartans who want every single OS to look as dull as RISC OS' GUI. I love nice-looking UIs, and I'm not against special effects as long as they're done properly. As horrified as I was by Windows XP's "Fisher-Price my first operating system" look, I'm fond of the Vista/7 look once a few things are tweaked to make it look better, be more usable, and work more smoothly (e.g. disabling window and taskbar translucency). I love having those overused shiny gradients on my buttons and scrollbars, and the progress bars especially are quite nicely done. I also love the right to have my windows painted in any color I like (which is why I don't switch to the Basic theme, by the way, since MS recently decided that changing a window's color is highly computationally expensive and requires a GPU to be done properly).

But really, you don't need a GPU for any of that. It's basic use of gradients and animations, with a bit of fading here and there. A lot of it can be rendered in advance and simply blitted on screen as needed, and the rest can be rendered in real time with little to no performance hit (and no responsiveness hit at all if you know how to do scheduling).
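As a rough illustration of the pre-render-and-blit idea (my own toy code, not taken from any toolkit; the buffer sizes and names are made up), the expensive part, computing the gradient, happens once, and every later draw is just a row copy:

```c
/* Toy sketch: pre-render a vertical gradient once, then "blit" it
 * into an in-memory stand-in for the framebuffer whenever a widget
 * needs drawing.  No per-pixel math on the hot path. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define W        64     /* button width in pixels  */
#define H        16     /* button height in pixels */
#define SCREEN_H 64

static uint32_t gradient[H][W];       /* rendered once, reused forever     */
static uint32_t screen[SCREEN_H][W];  /* stand-in for the real framebuffer */

static void prerender_gradient(void)
{
    for (int y = 0; y < H; y++) {
        uint32_t shade = 255u - (uint32_t)(y * 128 / H);   /* light to dark */
        uint32_t pixel = 0xFF000000u | (shade << 16) | (shade << 8) | shade;
        for (int x = 0; x < W; x++)
            gradient[y][x] = pixel;
    }
}

/* Drawing a "button" is now a plain row-by-row copy. */
static void blit_button(int dst_y)
{
    for (int y = 0; y < H && dst_y + y < SCREEN_H; y++)
        memcpy(screen[dst_y + y], gradient[y], sizeof gradient[y]);
}

int main(void)
{
    prerender_gradient();   /* paid once, e.g. at theme load time      */
    blit_button(0);         /* every draw afterwards is just a memcpy  */
    blit_button(20);
    printf("top-left pixel: 0x%08X\n", (unsigned)screen[0][0]);
    return 0;
}
```

Fades can be handled the same way by pre-rendering a handful of intermediate frames and blitting them in sequence.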

My laptop has that dual-GPU thing NVidia calls Optimus. You can gain an hour of battery by switching to the Intel GPU for most work and going back to the NVidia GPU when needed. Now, every time I use it, I think I'd rather have that extra hour of battery by not using GPU rendering at all, since, as I just argued, it isn't needed. How is that wrong?



RE[8]: Solid release
by nt_jerkface on Tue 12th Oct 2010 02:49 in reply to "RE[7]: Solid release"

"A CPU's power consumption is relatively small, and every piece of software needs it anyway. A GPU under load, on the other hand, easily consumes three times as much as the CPU. Is it really efficient to roughly quadruple the machine's power draw (the CPU's share plus three times as much again for the GPU) for the sake of drawing unreadable translucent windows?"


A GPU can handle 2D and 3D drawing more efficiently than a general-purpose CPU because it is specialized for the job; it doesn't carry the overhead a modern x64 CPU does. If you think fancy effects are a waste of power, that is a completely different subject.


"But really, you don't need a GPU for any of that. It's basic use of gradients and animations, with a bit of fading here and there."


You don't need a GPU to do 3D drawing either, but it sure helps, since it's a processor built for exactly that purpose.

In Win7, Aero stays on in power-saving mode but transparency is disabled, since that is what takes the most power. It doesn't have to be an either/or choice: the GPU should at the very least be taken advantage of when the computer is plugged in.

I'm not sure why you think E17 is the answer when it looks dated compared to KDE 4.5 and is developed at a glacial pace. Sure, it looks better than GNOME, but that isn't saying much. I also don't think you can push a return to CPU-only effects when browsers are moving towards GPU rendering. It makes more sense to have the windowing system feed the GPU frames just as a 3D game does, especially when open source desktops have limited developers. They shouldn't be bothered with optimizing fade or transparency routines when the GPU already knows how to do them.
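To make the "feed the GPU frames" idea concrete, here is a schematic sketch (my own toy code, with the actual OpenGL/EGL submission stubbed out, and all names and sizes made up) of the kind of per-frame data a compositor hands to the GPU: one textured quad per window, blended back to front by the hardware.

```c
/* Toy sketch: the window system keeps each window's contents in a
 * GPU texture and submits a list of textured quads once per frame,
 * much like a 3D game submits geometry.  Real submission would go
 * through OpenGL/EGL or similar; here it is just printed. */
#include <stdio.h>

struct quad {
    float    x, y, w, h;    /* where the window sits on screen        */
    unsigned texture_id;    /* GPU-side handle to the window surface  */
    float    opacity;       /* 1.0 = opaque, < 1.0 = translucent      */
};

/* Stand-in for a real draw call. */
static void submit_to_gpu(const struct quad *quads, int count)
{
    for (int i = 0; i < count; i++)
        printf("quad %d: texture %u at (%.0f,%.0f) size %.0fx%.0f opacity %.2f\n",
               i, quads[i].texture_id, quads[i].x, quads[i].y,
               quads[i].w, quads[i].h, quads[i].opacity);
}

int main(void)
{
    /* back-to-front: wallpaper, application window, translucent panel */
    struct quad frame[] = {
        {   0,  0, 1280, 800, 1, 1.0f },
        { 100, 80,  800, 600, 2, 1.0f },
        {   0,  0, 1280,  24, 3, 0.8f },
    };
    submit_to_gpu(frame, 3);   /* per-pixel blending happens on the GPU */
    return 0;
}
```

The window system's per-frame job is then mostly bookkeeping; the blending and scaling cost lands on hardware that was built for it.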


RE[9]: Solid release
by Neolander on Tue 12th Oct 2010 06:13 in reply to "RE[8]: Solid release"

"A GPU can handle 2D and 3D drawing more efficiently than a general-purpose CPU because it is specialized for the job; it doesn't carry the overhead a modern x64 CPU does. If you think fancy effects are a waste of power, that is a completely different subject."

"You don't need a GPU to do 3D drawing either, but it sure helps, since it's a processor built for exactly that purpose."

The right tool for the right job. Yes, a GPU can crunch more numbers for heavy vector and texture work; that's why we use it where it's needed: some games, graphics applications, and more recently scientific computing. What I question is the need to have it joyfully wasting electrical power and heating everything up for something as basic as UI rendering. Take the average IE/Word user: wouldn't he rather have longer battery life and laptop hardware that lasts longer?

"In Win7, Aero stays on in power-saving mode but transparency is disabled, since that is what takes the most power. It doesn't have to be an either/or choice: the GPU should at the very least be taken advantage of when the computer is plugged in."

Yes, plugging the computer in solves the power consumption problem. But then something else appears: heat, and the reduced hardware lifetime that comes with it. For a Word/IE user who plays nothing beyond Solitaire, we have to wonder: wouldn't he rather have much cheaper hardware without GPU acceleration at all? Hardware which, despite costing less, would last longer, because unlike most cheap hardware it would not overheat so easily.

"I'm not sure why you think E17 is the answer when it looks dated compared to KDE 4.5 and is developed at a glacial pace. Sure, it looks better than GNOME, but that isn't saying much."

It's not about E17's look (E17 is horrible as far as looks are concerned, we agree, although some themes look quite a bit better than the default one). I'm talking about rendering technology here.

E17 shows what software rendering is capable of without a major performance hit, and I use it here to answer people who blame the horrible performance of Metacity or software KWin on the limited capabilities of the CPU and ask for a GPU where it isn't needed. Simply remembering that the early Macintosh already managed to run a GUI shows what modern CPUs are really capable of when used properly.

As time has passed, people who just want to use Word and browse simple websites (webmail, news...) have been asked to buy more and more powerful hardware to do the very same thing, or otherwise suffer crappy performance. Sometimes that was justified (say, when memory protection was introduced); most of the time it wasn't. What I'm against is unjustified power waste.

If my OS vendor tells me, "I'm introducing a new capability-based security system that's safer and more efficient for desktop use than the current user/admin model, but it's a bit power-intensive because it has to parse complex databases, so the minimum specs will rise a bit", I say "please do". If he tells me, "Look, I don't know what to do in this release, so I'm going to introduce extremely power-intensive transparency algorithms in the UI. It won't be any easier to use, and arguably it will look worse, but you now need a supported GPU in the minimum specs and your battery life will drop by 20%", then I'm just going to slap him in the face and go looking for a more serious OS vendor, and rightfully so, in my opinion.

"I also don't think you can push a return to CPU-only effects when browsers are moving towards GPU rendering."

Again, it's a matter of using the right tool for the right job. I'm all for web browsers gaining Flash-like capabilities as far as gaming is concerned, but for my daily web browsing I'd prefer the GPU to stay off. What I'm against in GPU rendering in browsers is being forced to keep it enabled all the time because the "fallback" software paths look just as horrible as the Basic theme of Vista and Windows 7, the developers apparently not having even bothered to test them for readability. If people want their computer to last two hours on battery while browsing the web, fine, but in my case, no thanks.

"It makes more sense to have the windowing system feed the GPU frames just as a 3D game does,"

Debatable.

"especially when open source desktops have limited developers. They shouldn't be bothered with optimizing fade or transparency routines when the GPU already knows how to do them."

Look, to me this is a good example of wasted resources, in development this time. If the open source desktop has enough development resources to make KDE 4, PulseAudio, and Compiz, it has enough resources to make E17 a mature window manager and integrate it into all the desktop environments.

It's funny you mention the open source desktop, because I thought about bringing it up as an example of why putting a GPU everywhere is bad.
GPUs are the perfect example of duplicated effort. Hardware manufacturers couldn't be bothered to agree on standard software interfaces in their little holy war, so drivers pretty much have to be written on a per-chipset basis. The open source desktop is not a very interesting target for them, and to make things worse X11/Xorg is so horrible that its devs feel the urge to break it every few months, so the whole hardware-accelerated graphics stack of the open source desktop is in a horrible state. Fixing it would take dozens of years of hard work.

And you want to make vital software like UI rendering and web browsers depend on acceleration, knowing that the software rendering path will never be tested enough because the devs will think, "ah, users all have a working accelerated graphics stack anyway"?

This is, in my opinion, just perfectly wrong.

