Linked by Nth_Man on Mon 1st Jul 2013 15:37 UTC
Linux "This release adds support for bcache, which allows SSD devices to be used to cache data from other block devices; a Btrfs format improvement that makes the tree dedicated to storing extent information 30-35% smaller; support for XFS metadata checksums and self-describing metadata; timer-free multitasking for applications running alone on a CPU; SysV IPC and rwlock scalability improvements; the TCP Tail Loss Probe algorithm, which reduces tail latency of short transactions; KVM virtualization support on the MIPS architecture; and many new drivers and small improvements."
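For readers who want to try the new bcache support, the basic setup sketched below follows the kernel's bcache documentation; the device names are hypothetical and the bcache-tools userspace package is assumed to be installed:

```shell
# Sketch only -- /dev/sdb (an SSD) and /dev/sdc (an HDD) are hypothetical names.
# Format the SSD as a cache device and the HDD as a backing device:
make-bcache -C /dev/sdb
make-bcache -B /dev/sdc

# Register both with the kernel (udev normally does this automatically):
echo /dev/sdb > /sys/fs/bcache/register
echo /dev/sdc > /sys/fs/bcache/register

# Attach the backing device to the cache set using the set's UUID,
# then use /dev/bcache0 like any other block device:
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
mkfs.ext4 /dev/bcache0
```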
Link borked
by Ithamar on Mon 1st Jul 2013 16:16 UTC
Member since:
2006-03-20

There's a stray " in the link. The correct link is http://kernelnewbies.org/LinuxChanges

Reply Score: 3

Next release
by zima on Mon 1st Jul 2013 20:18 UTC
Member since:
2005-07-06

Will the release notes compare it to Windows 3.11? ;)

Reply Score: 4

Comment by Luminair
by Luminair on Mon 1st Jul 2013 20:58 UTC
Member since:
2007-03-30

cfg80211: Extend support for IEEE 802.11r Fast BSS Transition

fast bss transition is taking way too long to become mainstream, if you ask me

Reply Score: 2

Radeon graphics driver improvements
by lemur2 on Tue 2nd Jul 2013 06:53 UTC
Member since:
2007-02-17

Of personal interest to me, the open source Radeon graphics driver in the Linux 3.10 kernel now offers interfaces for interacting with the Unified Video Decoder (UVD) hardware on Radeon HD 4000 and later HD graphics cards. An open source UVD driver which uses this interface will be included in the next major revision to Mesa 3D (version 9.2 or 10.0).

http://www.h-online.com/open/features/What-s-new-in-Linux-3-10-1902...

The last remaining major piece of functionality for the open source Radeon graphics driver still to be released is dynamic power management. Code for this functionality has been released by AMD but not in time for the Linux 3.10 kernel, so it will only become available for the next Linux kernel release (3.11).

http://www.phoronix.com/scan.php?page=news_item&px=MTM5NjE

Reply Score: 4

twitterfire Member since:
2008-09-11

It's still very weak compared to AMD's binary driver. So if you ever need a GPU in Linux for serious applications like GPU rendering or GPU computing, you will still need AMD's binary driver.

The open source driver is fine, though, for running Tux Racer.

Reply Score: 3

lemur2 Member since:
2007-02-17

It's still very weak compared to AMD's binary driver. So if you ever need a GPU in Linux for serious applications like GPU rendering or GPU computing, you will still need AMD's binary driver.

The open source driver is fine, though, for running Tux Racer.


Part of the open source drivers' relative lack of performance up until now has apparently been due to the missing dynamic power management: without it, there was a risk that GPUs would overheat to the point of damage if the internal clocks ran too high. To avoid that risk, the open source drivers have so far hard-coded the internal clocks at their minimum setting.

Even with this penalty, the open source drivers have been achieving rendering frame-rate performance in most cases up to about 80% of that of AMD's binary driver for Linux. That is good enough GPU rendering performance for all but high-end gaming and professional 3D graphics applications, and more than adequate for the average Linux user's use cases.

There is every hope and expectation that the remaining gap will be bridged once the dynamic power management functionality is introduced in the open source drivers which ship with kernel 3.11 and beyond.

Video hardware acceleration using UVD and dynamic power management are new features for the open source Radeon graphics driver for Linux. They will take a little time to mature, but once they do there will be absolutely no reason to use the closed source binary driver any more for either GPU rendering or multimedia rendering applications.

As far as GPU computing goes, it must be said that it still has a way to go. Perhaps next year. Fortunately, GPU computing is a minor use case, so the large majority of users will not need the hassle of AMD's binary driver beyond later this year.

Just in time, too, since it is rumoured that KDE won't work with Ubuntu's Mir, and so Kubuntu and other KDE distributions based on Ubuntu's codebase will have to move to Wayland.

http://blogs.kde.org/2013/06/26/kubuntu-wont-be-switching-mir-or-xm...

AMD's binary driver won't support Wayland, since Wayland depends on kernel modesetting (KMS), so the driver must be part of the kernel rather than a tacked-on binary blob.

Edited 2013-07-03 02:27 UTC

Reply Score: 4

lucas_maximus Member since:
2009-08-18

Even with this penalty, the open source drivers have been achieving rendering frame-rate performance in most cases up to about 80% of that of AMD's binary driver for Linux. Good enough GPU rendering performance for all but high-end gaming and professional 3D graphics applications. More than adequate for the average Linux user use cases.


The whole point of buying these cards is to use them for high-end gaming and professional work.

Otherwise you don't need the card and are quite happy with whatever integrated graphics you get.

His point still stands. The performance isn't up to par, and it is still better at the moment to use the proprietary driver.

Edited 2013-07-03 08:13 UTC

Reply Score: 3

lemur2 Member since:
2007-02-17

" Even with this penalty, the open source drivers have been achieving rendering frame-rate performance in most cases up to about 80% of that of AMD's binary driver for Linux. Good enough GPU rendering performance for all but high-end gaming and professional 3D graphics applications. More than adequate for the average Linux user use cases.


The whole point of buying these cards is that you are using them for high end gaming and professional work.

Otherwise you don't need the card and are quite happy with whatever integrated graphics they give you.
"

There is a whole range of graphics cards made by AMD/ATI which are supported by the open source Radeon driver. This range does include cards designed for high-end gaming and professional work, but it also includes a greater number of mid-range and low-end graphics cards (with an appropriately lower price) suitable for average users and even gamers who do not aspire to the very high end.

http://www.tomshardware.com/reviews/gaming-graphics-card-review,310...

http://www.tomshardware.com/reviews/gaming-graphics-card-review,310...

As far as performance goes, the first clear group of GPUs includes the Radeon HD 7870 LE and below. You see a great correlation between speed and cost up to this $250 card. Every dollar you spend yields a commensurate return in performance. In the value-oriented segment, that's exactly what you want to see.

Beyond that point, price increases faster than performance, negatively affecting absolute value. Having said that, if you're a hardcore gamer who desires high resolutions and taxing detail settings, cards like the GeForce GTX 670 or 770 will make the difference between playable and unplayable frame rates.


The above quote gives a fair description. One has to ignore the price/performance tradeoff and spend a disproportionate amount of money before one gets into the "hardcore gamer" performance category. This means most users are not in that category; most users fall into the "value-oriented segment".

In the graphics cards hierarchy, my own modest desktop includes a Radeon HD 4650 and my laptop includes a Mobility Radeon HD 5430. These two are definitely not high-end cards. However, both of these offer better raw performance than the integrated Intel HD Graphics 4000.

His point still stands. The performance isn't up to par, and it is still better at the moment to use the proprietary driver.


Not really. I happen to fall into the "value-oriented segment". The current performance of my laptop and desktop graphics is only at 80% of its full raw potential, but it still matches 100% of what I would have got had I had a similarly-priced system with Intel HD Graphics 4000 instead.

I can look forward to continued out-of-the-box support, a further 20% performance improvement, and support for Wayland in the future by staying with the open source driver. Or, if I really need to (it's not my use case, but let's say it was), I can go to the trouble of installing a proprietary driver now and then upgrade to the open source one in a few months when the new features have stabilised and performance has improved.

So for my use case, and that of the significant majority of users of AMD/ATI graphics cards, the upcoming Linux kernel 3.11 and the next version of mesa unquestionably represent the point where it no longer makes any sense to continue using the proprietary driver.

If I had Intel hardware I would still enjoy out-of-the-box support and support for Wayland in the future, but I would not have any hope or expectation of an upcoming 20% performance gain.

Edited 2013-07-03 10:42 UTC

Reply Score: 2

lucas_maximus Member since:
2009-08-18

So you've basically proven my point. Thanks.

So slower graphics cards will perform more poorly than they otherwise would have (you made something slow run even slower, way to go), and those with higher-end cards wouldn't be using this driver, because nobody is going to pay a lot of money for a GPU and be happy with 80%.

Also, the real WTF with your decision-making is that you were quite happy to have hardware that you paid good money for not working as well as it could have, because of your ideological need to use an open driver, rather than just installing the driver that gives you everything you need.

EDIT: The AMD HD 4000 series was introduced in 2008, so you've been using something slower for 5 years while waiting for an open driver. This is idiotic.

Edited 2013-07-03 11:02 UTC

Reply Score: 2

lemur2 Member since:
2007-02-17

So you've basically proven my point. Thanks.


WTF? Clearly you have reading comprehension difficulty.

So slower graphics cards will perform more poorly than they otherwise would have (you made something slow run even slower, way to go), and those with higher-end cards wouldn't be using this driver, because nobody is going to pay a lot of money for a GPU and be happy with 80%.


These GPUs are not a lot of money; that is the point. They currently get performance as good as Intel integrated graphics for a similar price, yet they won't be obsoleted when the binary blob driver drops support for them, because there is a reasonable open source driver. As a bonus, the open source driver has room for future improvement via the simple expedient of a higher internal clock rate, which will become safe once dynamic power management lands in distribution kernels (it is available right now if you want to compile your own kernel).
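For anyone who does compile their own 3.11-era kernel: my understanding is that the radeon dynamic power management code ships disabled by default and is switched on with a boot parameter. A sketch of what that looks like (file paths and tool names vary by distribution):

```shell
# Sketch: enable radeon dynamic power management via the kernel command line.
# In /etc/default/grub (path varies by distribution), add radeon.dpm=1:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.dpm=1"

# Regenerate the grub config, e.g.:
#   sudo update-grub                               (Debian/Ubuntu)
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg    (Fedora/openSUSE)

# After rebooting, check the current DPM state via sysfs:
cat /sys/class/drm/card0/device/power_dpm_state
```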

Also the real WTF with you decision making is that you were quite happy to basically have hardware that you paid good money for not working as well as it could have because of your ideological need to use an open driver, rather than just install the driver that gives you everything you need.


There is a practical trade-off to be made that has nothing at all to do with ideology. I can use the open source driver out of the box with no need to expose my system to the problems of binary blob drivers; I get all the performance I need, and I have spent no more money on hardware than any other option (Intel or nvidia, open or closed driver notwithstanding). AMD/ATI cards have the best value-for-money raw performance, so I can afford a 20% performance hit and be no worse off than the Intel or nvidia options for the same money.

EDIT: The AMD HD4000 series was introduced in 2008, so you been using something slower for 5 years while waiting for an open driver. This is idiotic.


The desktop system is more than 5 years old. Running Linux, it meets all my performance requirements (it wouldn't be that great for Win7 or Vista), and it cost no more than any other option that would have given similar performance. Because it was AMD/ATI, yes, I could have squeezed say 20% more out of it using a binary blob driver, but I didn't need to and didn't want the unnecessary hassle.

Soon I will get a 20% performance boost that is a pure bonus. You wouldn't get that bonus from any other five+ year old graphics. If you were using a binary blob driver and a 5+ year old lower-end card, what you are more likely to get is discontinued support ...

Edited 2013-07-03 11:44 UTC

Reply Score: 2

lucas_maximus Member since:
2009-08-18

WTF? Clearly you have reading comprehension difficulty.


No, I think you didn't really know what you said.

These GPUs are not a lot of money; that is the point. They currently get performance as good as Intel integrated graphics for a similar price, yet they won't be obsoleted when the binary blob driver drops support for them, because there is a reasonable open source driver. As a bonus, the open source driver has room for future improvement via the simple expedient of a higher internal clock rate, which will become safe once dynamic power management lands in distribution kernels (it is available right now if you want to compile your own kernel).


Possible future improvements? These improvements have been 5 years coming.

Binary drivers almost never drop support until the graphics card is considered legacy. Looking at the nvidia site now, the drivers go back to the GeForce 5 series, which was almost 10 years ago.

TBH, if you are running a card that is over 6 or 7 years old, I doubt anything that uses compositing is going to work that well.

It doesn't matter how expensive something is; I want it to work properly from day one, not possibly some time in the future when there will be faster kit at the same price.

There is a practical trade-off to be made that has nothing at all to do with ideology. I can use the open source driver out of the box with no need to expose my system to the problems of binary blob drivers; I get all the performance I need, and I have spent no more money on hardware than any other option (Intel or nvidia, open or closed driver notwithstanding). AMD/ATI cards have the best value-for-money raw performance, so I can afford a 20% performance hit and be no worse off than the Intel or nvidia options for the same money.


There is very little practical tradeoff; installing the drivers these days is trivial on most popular distros.

What problems with binary blobs? Actually, don't tell me; I've had enough conversations with you to know you will come out with a list of ridiculous reasons that most people wouldn't care about.

The desktop system is more than 5 years old. Running Linux, it meets all my performance requirements (it wouldn't be that great for Win7 or Vista), and it cost no more than any other option that would have given similar performance. Because it was AMD/ATI, yes, I could have squeezed say 20% more out of it using a binary blob driver, but I didn't need to and didn't want the unnecessary hassle.


Installing the drivers is trivial these days. There is very little extra effort.

I'd been running an 8800GT until recently, and I had proper driver support on Linux using the nvidia drivers from Fedora 7 or 8 until the latest openSUSE. When I upgraded to a 660 GTX it was fine also.

Soon I will get a 20% performance boost that is a pure bonus. You wouldn't get that bonus from any other five+ year old graphics. If you were using a binary blob driver and a 5+ year old lower-end card, what you are more likely to get is discontinued support ...


Most people would have been using the driver that gave the best performance and support since day one.

As I said before, there is binary driver support going back almost 10 years, and before that the VESA drivers are probably more than good enough, because seriously you aren't going to be able to run anything decently if it requires any advanced GPU features.

Edited 2013-07-03 12:45 UTC

Reply Score: 1