Linked by Thom Holwerda on Mon 9th May 2011 21:14 UTC, submitted by Elv13
Since Nokia announced its switch to Windows Phone 7, people have been worried about the future of Qt. Well, it turns out Nokia is still going full steam ahead with Qt: it has just announced its plans for Qt 5. Some major changes are afoot code- and functionality-wise, but the biggest change is that Qt 5 will be developed out in the open from day one (unlike Qt 4). There will be no distinction between Nokia developers and third-party developers.
Thread beginning with comment 472457
Meh
by Neolander on Tue 10th May 2011 05:57 UTC
Neolander
Member since:
2010-03-08

GPU acceleration required + more JavaScript = a big meh, as far as I'm concerned.

There are those who, like Haiku and Enlightenment, choose to optimize their graphics stack enough that it doesn't require a gaming-grade GPU to smoothly draw gradients and buttons. And there are those who just give up, use a slow web scripting language everywhere in the UI at the expense of responsiveness, and turn on the battery hog that a GPU is for trivialities. Sadly, more and more organisations put themselves in the second category...

Edited 2011-05-10 06:04 UTC

Reply Score: 1

RE: Meh
by makc on Tue 10th May 2011 06:25 in reply to "Meh"
makc Member since:
2006-01-11

Haiku's app_server, optimized? Don't shout that too loudly...
And mind you, I love Haiku, as hacked together as it is.

Reply Parent Score: 4

RE: Meh
by moondevil on Tue 10th May 2011 07:25 in reply to "Meh"
moondevil Member since:
2005-07-08

As I already told you in other threads, in the IT world VM-based environments + JIT are now valued more highly than the previous all-native solutions.

The hardware allows it, and people prefer the productivity gains over the difficulty of dealing with low-level APIs.

Surely you can also create high-level APIs in native languages, but no one seems to care about doing so.

That does not mean I agree with it, but it is the reality, and I doubt any of us will be able to change it.

Reply Parent Score: 2

RE[2]: Meh
by vivainio on Tue 10th May 2011 07:34 in reply to "RE: Meh"
vivainio Member since:
2008-12-26


> The hardware allows it, and people prefer the productivity gains over the difficulty of dealing with low-level APIs.
>
> Surely you can also create high-level APIs in native languages, but no one seems to care about doing so.


QML sort of lets you have your cake and eat it too: you get full access to the native platform at the "metal" level (C++ & Qt), while being able to write as much of the code as you like in the "scripted" environment on the QML side.

It's a much neater concept than e.g. C# / Silverlight, where you are forced to write almost everything in C# - and C#/CLR is still not "low-level enough" when you really need that (to conserve RAM, hand-tune algorithms for CPU cache use, whatever).
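
To illustrate the split, here is a minimal sketch of the pattern using the Qt 4.7-era declarative API (the Backend class, its counter property and increment() method are hypothetical names of mine, not from any real project):

    // backend.h - the "metal" side: a C++ QObject exposed to QML
    #include <QObject>

    class Backend : public QObject {
        Q_OBJECT
        Q_PROPERTY(int counter READ counter NOTIFY counterChanged)
    public:
        explicit Backend(QObject *parent = 0) : QObject(parent), m_counter(0) {}
        int counter() const { return m_counter; }
        Q_INVOKABLE void increment() { ++m_counter; emit counterChanged(); }
    signals:
        void counterChanged();
    private:
        int m_counter; // could just as well wrap a hand-tuned native data structure
    };

    // main.cpp - glue: publish the C++ object into the QML context
    #include <QApplication>
    #include <QUrl>
    #include <QDeclarativeView>
    #include <QDeclarativeContext>
    #include "backend.h"

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);
        Backend backend;
        QDeclarativeView view;
        view.rootContext()->setContextProperty("backend", &backend);
        view.setSource(QUrl::fromLocalFile("main.qml"));
        view.show();
        return app.exec();
    }

    // main.qml - the "scripted" side: declarative UI plus inline JavaScript
    import QtQuick 1.0

    Rectangle {
        width: 200; height: 100
        Text { anchors.centerIn: parent; text: "Count: " + backend.counter }
        MouseArea { anchors.fill: parent; onClicked: backend.increment() }
    }

The QML/JS layer stays thin, and anything that turns out to be performance-critical can migrate behind a Q_INVOKABLE method or a property without touching the UI code.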

Reply Parent Score: 3

RE: Meh
by vivainio on Tue 10th May 2011 07:30 in reply to "Meh"
vivainio Member since:
2008-12-26

> There are those who, like Haiku and Enlightenment, choose to optimize their graphics stack enough that it doesn't require a gaming-grade GPU to smoothly draw gradients and buttons.


You should brush up on how graphics works these days. It's not at all about "optimizing" - it's about creating a graphics model that works as fast as possible on modern hardware, instead of a "generic" model that works okay everywhere.

> And there are those who just give up, use a slow web scripting language everywhere in the UI at the expense of responsiveness, and turn on the battery hog that a GPU is for trivialities. Sadly, more and more organisations put themselves in the second category...


The GPU is always on anyway. You can waste it and burn the power on the CPU instead, or get with the program and use that GPU. Using the GPU over the CPU prolongs battery life and makes applications look better.

Reply Parent Score: 3

RE[2]: Meh
by Neolander on Tue 10th May 2011 20:26 in reply to "RE: Meh"
Neolander Member since:
2010-03-08

> You should brush up on how graphics works these days. It's not at all about "optimizing" - it's about creating a graphics model that works as fast as possible on modern hardware, instead of a "generic" model that works okay everywhere.

Speed does matter, but it's not the only thing that matters. Things like battery life, stability and portability matter too, in the context of Qt. Sadly, some of the platforms that Qt runs on simply do not have stable, efficient GPU drivers for recent hardware, because of the way the GPU ecosystem works (particularly on the desktop). In fact, considering how easy it is to crash my GPU drivers on Windows, and that driver update which broke something as basic as fan operation on high-end graphics cards some time ago, I have to wonder whether such a thing as a reliable GPU driver for modern hardware exists on any platform...

> The GPU is always on anyway. You can waste it and burn the power on the CPU instead, or get with the program and use that GPU. Using the GPU over the CPU prolongs battery life and makes applications look better.

I don't think it's impossible to turn off a GPU. NVidia's Optimus software does it, and someone (I think it was oahiom) told me that on AMD and Intel GPUs there is even a fine-grained power management model allowing one to shut down parts of the GPU (e.g. individual execution units).

While it's true that if you can't turn a GPU off, the most efficient thing to do is to use it, turning off GPUs, or parts of them, is totally worth it when CPU power is sufficient. I haven't made a proper comparison of software rendering with all GPUs off (or in their minimal power state) against GPU-accelerated rendering yet, but I can already tell you the difference between an idle Intel GPU and an idle NVidia GPU: the battery life of my laptop is halved in the latter case. This is a pretty impressive result, because it means that, even neglecting the power consumption of the Intel IGP, an NVidia GPU doing nearly nothing eats up as much power as all the other power-hungry components of a modern laptop (screen, HDD, wireless, CPU...) combined!

A more debatable example would be a comparison between smartphones with and without GPUs: those without GPUs generally provide pretty smooth graphics too. You lose the shiny effects, granted, but you can also get hardware at half the price with twice the battery life (3 days vs. 1.5 days).

What this shows is that while CPU power management has dramatically improved in the past few years, high-end GPUs are still the power-hungry beasts they were in the days of dedicated graphics accelerators: suitable for gaming or high-end graphics, but not for anything that runs on a battery. When I see everyone embracing them for trivial tasks, I do think the computing world has gone completely mad.

Reply Parent Score: 1

RE: Meh
by _txf_ on Tue 10th May 2011 09:33 in reply to "Meh"
_txf_ Member since:
2008-03-17

> GPU acceleration required + more JavaScript = a big meh, as far as I'm concerned.


Why have square wheels when you can have round ones?

That is essentially the logic you're proposing. It is more energy-efficient to use ASICs designed for the task at hand than to throw generic compute muscle at a problem; why should graphics be any different?

If most machines include dedicated graphics hardware, why not optimise for the majority?

Last but not least... since when have any accelerated UIs needed gaming-level GPUs?

Reply Parent Score: 4

RE: Meh
by drahca on Tue 10th May 2011 11:39 in reply to "Meh"
drahca Member since:
2006-02-23

> GPU acceleration required + more JavaScript = a big meh, as far as I'm concerned.


We are not talking about gaming-grade GPUs here. All CPUs will have GPUs integrated into them in the near future: AMD has its Fusion program, Intel already ships Sandy Bridge with an integrated GPU, and all ARM chips in mobile phones have a GPU on die. So if you already have this piece of silicon that can do these drawing operations more efficiently than a CPU can, why not use it?

Also, GPUs require a different drawing model than traditional CPU-based renderers do. Compositing window managers keep per-window off-screen buffers which they use to composite the desktop, while CPU-based renderers write more or less directly into the screen buffer. While this "deferred rendering" requires more memory than "immediate mode", it is actually more efficient when moving windows around, because applications do not have to repaint all the time.
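
A rough sketch of the difference in Qt terms (drawWindowContents() is hypothetical and stands in for the application's real painting code; the buffer management is simplified):

    #include <QPainter>
    #include <QPixmap>
    #include <QPoint>

    void drawWindowContents(QPainter *p); // hypothetical: the app's actual painting

    // Immediate mode: every expose or window move forces the application to
    // repaint its (possibly expensive) contents straight into the screen buffer.
    void onExpose(QPainter *screen) {
        drawWindowContents(screen); // expensive paint, runs on every expose
    }

    // Deferred/composited mode: the expensive paint happens once, into an
    // off-screen buffer; the compositor then just re-blits that buffer
    // (a cheap copy, or a textured quad on the GPU) whenever the window moves.
    void onDamage(QPixmap *buffer, int w, int h) {
        *buffer = QPixmap(w, h);
        QPainter p(buffer);
        drawWindowContents(&p); // expensive paint, runs only when content changes
    }

    void compose(QPainter *screen, const QPoint &windowPos, const QPixmap &buffer) {
        screen->drawPixmap(windowPos, buffer); // cheap recomposite on every move
    }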

Enlightenment uses such a deferred approach: it can use a software rasterizer, but it can also use the GPU via OpenGL. Haiku is stuck in the '90s for now.

Basing the drawing model on a GPU-friendly paradigm and OpenGL does of course not mean you cannot run it on a CPU. There are already numerous ways to run OpenGL on the CPU for backwards compatibility, such as LLVMpipe.
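
Qt itself can already be run both ways, for what it's worth: in Qt 4.x the same scene can be rendered through either a pure-software or a GPU-backed path. A minimal sketch (note the "opengl" graphics system was still considered experimental in 4.x):

    #include <QApplication>
    #include <QGraphicsScene>
    #include <QGraphicsView>
    #include <QGLWidget>

    int main(int argc, char *argv[]) {
        // Must be called before the QApplication is constructed:
        // "raster" = pure CPU rendering, "opengl" = GPU-backed rendering.
        QApplication::setGraphicsSystem("raster");
        QApplication app(argc, argv);

        QGraphicsScene scene;
        scene.addText("Same scene, either backend");

        QGraphicsView view(&scene);
        // Alternatively, put just this one view on the GPU:
        // view.setViewport(new QGLWidget);
        view.show();
        return app.exec();
    }

The same binary can then fall back to the raster path on machines whose GPU drivers cannot be trusted.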

Reply Parent Score: 5