“This release adds support for bcache, which allows to use SSD devices to cache data from other block devices; a Btrfs format improvement that makes the tree dedicated to store extent information 30-35% smaller; support for XFS metadata checksums and self-describing metadata, timer free multitasking for applications running alone in a CPU, SysV IPC and rwlock scalability improvements, the TCP Tail loss probe algorithm that reduces tail latency of short transactions, KVM virtualization support in the MIPS architecture, many new drivers and small improvements.”
There’s a ” in the link. Correct link is http://kernelnewbies.org/LinuxChanges …..
Will the release notes compare it to Windows 3.11?
cfg80211: Extend support for IEEE 802.11r Fast BSS Transition
fast bss transition is taking way too long to become mainstream, if you ask me
Of personal interest to me, the open source Radeon graphics driver in the Linux 3.10 kernel now offers interfaces for interacting with the Unified Video Decoder (UVD) hardware on Radeon HD 4000 and later HD graphics cards. An open source UVD driver which uses this interface will be included in the next major revision to Mesa 3D (version 9.2 or 10.0).
http://www.h-online.com/open/features/What-s-new-in-Linux-3-10-1902…
The last remaining major piece of functionality for the open source Radeon graphics driver still to be released is dynamic power management. Code for this functionality has been released by AMD but not in time for the Linux 3.10 kernel, so it will only become available for the next Linux kernel release (3.11).
http://www.phoronix.com/scan.php?page=news_item&px=MTM5NjE
It’s still very weak compared to AMD’s binary driver. So if you ever need a GPU in Linux for serious applications like GPU rendering or GPU computing, you will still need AMD’s binary driver.
Open source driver is fine though for running Tux Racer.
Part of the open source drivers’ relative lack of performance up until now has apparently been due to the lack of dynamic power management: without it, there was a risk that GPUs would overheat to the point of damage if the internal clocks ran too high. To avoid that risk, the open source drivers have so far hard-coded the internal clocks at their minimum setting.
Even with this penalty, the open source drivers have been achieving rendering frame-rate performance in most cases up to about 80% of that of AMD’s binary driver for Linux. That is good enough GPU rendering performance for all but high-end gaming and professional 3D graphics applications, and more than adequate for the average Linux user’s use cases.
There is every hope and expectation that the remaining gap will be bridged once the dynamic power management functionality is introduced in the open source drivers which ship with kernel 3.11 and beyond.
Video hardware acceleration using UVD and dynamic power management are new features for the open source Radeon graphics driver for Linux. They will take a little time to mature, but once they do there will be absolutely no reason to use the closed source binary driver any more for either GPU rendering or multimedia rendering applications.
As far as GPU computing goes, it must be said that that still has a way to go. Perhaps next year. Fortunately, GPU computing is a minor use case, so the large majority of users will not need to go to the hassle of using AMD’s binary driver from later this year.
Just in time, too, since it is rumoured that KDE won’t work with Ubuntu’s MIR, and so Kubuntu and other KDE distributions based on Ubuntu’s codebase will have to move to Wayland.
http://blogs.kde.org/2013/06/26/kubuntu-wont-be-switching-mir-or-xm…
AMD’s binary driver won’t support Wayland, since Wayland has a dependency on kernel modesetting (KMS) so the driver must be a part of the kernel, and not a tacked-on-later binary blob.
Edited 2013-07-03 02:27 UTC
The whole point of buying these cards is that you are using them for high end gaming and professional work.
Otherwise you don’t need the card and are quite happy with whatever integrated graphics they give you.
His point still stands. The performance isn’t up to par, and it is still better at the moment to use the proprietary driver.
Edited 2013-07-03 08:13 UTC
There are a whole range of graphics cards made by AMD/ATI which are supported by the open source Radeon driver. This range does include cards designed for high end gaming and professional work, but it also includes a greater number of mid-range and low-end graphics cards (with an appropriate lower price) suitable for use by average users and even gamers who do not aspire to the very high end.
http://www.tomshardware.com/reviews/gaming-graphics-card-review,310…
The above quote gives a fair description. One has to ignore the price/performance tradeoff, and spend a disproportionate amount of money, before one gets into the “hardcore gamer” performance category. This means most users are not in that category. Most users fall into the “value-oriented segment”.
In the graphics cards hierarchy, my own modest desktop includes a Radeon HD 4650 and my laptop includes a Mobility Radeon HD 5430. These two are definitely not high-end cards. However, both of these offer better raw performance than the integrated Intel HD Graphics 4000.
Not really. I happen to fall into the “value-oriented segment”. The current performance of my laptop and desktop graphics is only at 80% of its full raw potential, but it still matches 100% of what I would have got had I had a similarly-priced system with Intel HD Graphics 4000 instead.
I can look forward to continued out-of-the-box support, a further 20% performance improvement and support for Wayland in the future by staying with the open source driver, or if I really need to (it’s not my use case, but let’s say it was) I can go to the trouble of installing a proprietary driver now and then upgrade to the open source one in a few months when the new features have stabilised and performance has improved.
So for my use case, and that of the significant majority of users of AMD/ATI graphics cards, the upcoming Linux kernel 3.11 and the next version of mesa unquestionably represent the point where it no longer makes any sense to continue using the proprietary driver.
If I had Intel hardware I would still enjoy out-of-the-box support and support for Wayland in the future, but I would not have any hope or expectation of an upcoming 20% performance gain.
Edited 2013-07-03 10:42 UTC
So you have basically proven my point. Thanks.
So slower graphics cards will perform more poorly than they otherwise would have (so you made something that is slow run even slower, way to go), and those with the higher-end cards wouldn’t be using this driver, because nobody is going to pay a lot of money for a GPU and be happy with 80%.
Also, the real WTF with your decision making is that you were quite happy to have hardware that you paid good money for not working as well as it could have, because of your ideological need to use an open driver, rather than just installing the driver that gives you everything you need.
EDIT: The AMD HD4000 series was introduced in 2008, so you’ve been using something slower for 5 years while waiting for an open driver. This is idiotic.
Edited 2013-07-03 11:02 UTC
WTF? Clearly you have reading comprehension difficulty.
These GPUs are not a lot of money. That is the point. They currently get performance as good as Intel integrated graphics for a similar price, yet they won’t be obsoleted when the binary blob driver drops support for them, because there is a reasonable open source driver. As a bonus, the open source driver has room for future improvement via the simple expedient of using a higher internal clock rate, which will become safe to do when dynamic power management becomes available in kernel distributions (it is available right now if you want to compile your own kernel).
There is a practical trade-off to be made that has nothing at all to do with ideology. I can use the open source driver out of the box with no need to expose my system to the problems of binary blob drivers, I get all the performance I need, and I have spent no more money on hardware than any other option (Intel or nvidia, open or closed driver notwithstanding). AMD/ATI cards have the best value-for-money raw performance, so I can afford a 20% performance hit and be no worse off than Intel or nvidia options for the same money.
The desktop system is more than 5 years old. Running Linux it meets all my performance requirements (it wouldn’t be that great for Win7 or Vista) and it cost no more than any other option that would have given similar performance. Because it was AMD/ATI then yes I could have squeezed say 20% more out of it using a binary blob driver but I didn’t need to and I didn’t want to go to the unnecessary hassle.
Soon I will get a 20% performance boost that is a pure bonus. You wouldn’t get that bonus from any other five+ year old graphics. If you were using a binary blob driver and a 5+ year old lower-end card, what you are more likely to get is discontinued support …
Edited 2013-07-03 11:44 UTC
No, I don’t think you really knew what you were saying.
Possible future improvements? These improvements have been 5 years coming.
Binary drivers almost never drop support until the graphics card is considered legacy. Just looking at the nvidia site now and the drivers go back to the GeForce 5 series which was almost 10 years ago.
TBH if you are running a card that is over 6 or 7 years old, I doubt anything that uses compositing is going to work that well.
It doesn’t matter how expensive something is; I want it to work properly from day one, not possibly some time in the future when there is going to be faster kit at the same price.
There is very little practical tradeoff, installing the drivers these days is trivial on most popular distros.
What problems with binary blobs? Actually don’t tell me, I had enough conversations with you to know that I am sure you will come out with the list of ridiculous reasons that most people wouldn’t care about.
Installing the drivers is trivial these days. There is very little extra effort.
I’ve been running an 8800GT until recently, and I had proper driver support on Linux using the nvidia drivers from Fedora 7 or 8 until the latest OpenSuse. When I upgraded to a 660GTX it was fine also.
Most people would have been using the driver that gave you the best performance and support since day one.
As I said before, there is binary driver support all the way back to 10 years ago, and before that the VESA drivers are probably more than good enough, because seriously you aren’t going to be able to run anything decently if it requires any advanced GPU features.
Edited 2013-07-03 12:45 UTC
And as I said before, this is where you are utterly wrong. The current drivers work perfectly well for desktop compositing and all but the most demanding 3D games.
What is wrong with you? Why would you even try to deny this? After all, I am the one actually running the Linux desktop compositing on my pedestrian older machine (WinXP era) with its modest graphics card using the open source drivers. I can see it running before me as I type this. It works beautifully.
So why do you fib so? I truly just don’t get it, I really don’t. Where does it get you?
I’m not fibbing; the fact of the matter is that for 5 years the hardware has been running sub-optimally and you just put up with it, when in most distros it is pretty simple to install the driver with your package manager.
If it was 2004, and you had to go through the procedure I did with Debian, then I would understand.
I quote: the previous VESA drivers are probably more than good enough because seriously you aren’t going to be able to run anything that decently if it requires any advanced GPU features
That’s a fib.
Binary blob drivers require the driver wrappers to be re-compiled whenever a new kernel is installed, which in turn is usually managed by dkms.
http://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support
DKMS is prone to occasional failure, even in 2013. DKMS also requires that you have the kernel header files installed, and the version of the header files must of course match the version of the kernel. All of this makes updating the kernel a much more involved and risky process than it is with open source drivers, which have no need of dkms.
Even once you have it installed properly via dkms, the drivers effectively become loadable kernel modules.
https://en.wikipedia.org/wiki/Loadable_kernel_module
The kernel loads these modules after it starts, so it cannot initialise these devices along with the kernel itself. Open source graphics drivers do not require dkms, and they also enable kernel modesetting.
http://en.wikipedia.org/wiki/Mode_setting
Also, because the loadable module is not part of the kernel itself, it ends up meaning that the X server has to be a root process. In order to run an X server as a userland process, the driver must be part of the kernel itself, and it must be possible to use kernel modesetting.
http://www.phoronix.com/scan.php?page=news_item&px=NzM2MA
This means that if the X server crashes when running a binary blob driver, it brings down the entire machine. You can’t just kill the program which caused the X server to crash and then re-start the X server.
There is a similar problem with respect to running Wayland. Wayland also depends on kernel modesetting.
One really, really doesn’t want to run a binary blob graphics driver unless one really has to.
Edited 2013-07-04 10:36 UTC
No it isn’t. You are being a dick. If you have an el cheapo graphics card that is over 6 years old, you are going to have problems. I have that on my Intel-based Dell, so I have to run MATE instead of the newer DEs.
Look I know how it works. I had to do it quite a few times back at the start of the last decade.
But let’s be realistic: the package manager on most distros does all the hard work for you. Wayland isn’t around yet, and tbh I’ve seen the X server crash a handful of times over the years.
You know, if you were using a decent operating system like Windows 7, which has a stable ABI/API and can recover the driver properly when it crashes, it wouldn’t matter whether the driver was open source or not.
Edited 2013-07-04 12:22 UTC
It is a fib without a doubt. I have two el cheapo graphics cards that are over 6 years old, and I run a feature-full compositing desktop (KDE 4.10.4, the very latest version) without the slightest problem. Fast, slick, responsive, powerful, stable, and elegant. What is wrong with you? Why would you deny this?
It does the hard work of installing a binary blob driver for you, when it works. When it doesn’t work, you are in a world of pain.
You ignore another major problem:
http://en.wikipedia.org/wiki/Kernel_mode-setting
“User-space mode setting would have needed superuser privileges for direct hardware access, so kernel-based mode setting increases security because the user-space graphics server does not need superuser privileges.”
Also with regard to X-server crashes, the wikipedia article on modesetting refers to the page on “screens of death”.
http://en.wikipedia.org/wiki/Screens_of_death
“A kernel panic is used primarily by Unix and Unix-like operating systems: the Unix equivalent of Microsoft’s Blue Screen of Death. It is used to describe a fatal error from which the operating system cannot recover.”
Without a doubt, the biggest cause of kernel panics these days is binary blob drivers.
You know that with a decent operating system like Linux, if you use an open source driver instead of a binary blob driver, you can recover the driver properly when it crashes. All that is required is that the driver is part of the kernel, it doesn’t matter whether the OS was Windows or not.
So I repeat myself for the sake of those a bit slow on the uptake: one really doesn’t want to run a binary blob driver with Linux unless one really has to. Binary drivers are really, really worth avoiding.
Edited 2013-07-04 13:20 UTC
Because it isn’t true for the majority of graphics cards, which are Intel based; you and I were lucky enough to get something decent in the machine.
Guess what: on the same graphics card that KDE works really badly on (Intel, on my laptop, circa 2006), I can run a full Windows 7 desktop with bugger all problems.
Gnome 3 and KDE 4 run like arse and I am forced to use MATE, which is a Gnome 2 fork.
So stop trying to change the circumstances to fit your argument.
Your machine isn’t the whole world you know.
Wouldn’t be a problem if the kernel had a proper stable ABI/API for drivers and then all a driver creator would need to do is respect the interface provided.
Also believe it or not, I don’t care if my desktop system is really stable or not or my crappy laptop that I use for work when I am on the road.
If my machine crashes twice a week I really don’t give a shit because it takes less than 30 seconds to boot.
If it was crashing every hour I might give a shit. Even if it was that much of a problem I would just run Linux in a VM and again I wouldn’t give a shit.
Windows doesn’t need the driver to be part of the kernel because it is designed properly, with a well defined and documented interface.
Sorry, Windows can recover properly without requiring the driver to be open source; don’t try to spin it to your beliefs.
If they bothered having a stable API/ABI like proper software engineers provide, it wouldn’t matter if the driver was open source or not.
You will try to deny this fact because you are a zealot. But this is the truth of the matter whatever you say because it is 100% verifiable.
It has even been mentioned in one of the news articles on this site how much better the Windows 7 graphics stack is.
http://www.osnews.com/story/21999/Editorial_X_Could_Learn_a_Lot_fro…
It only needs avoiding because of the stupid decision made by Greg H.
Look you can claim that I am slow and not understanding it, but every single decent software engineer knows that providing a stable and well documented interface is a good thing. Just because you don’t get it and have bought into Greg H’s bullshit doesn’t mean you are right.
In Windows the drivers are as much a part of the kernel as they are in Linux (with kernel modules). The only difference is that Windows provides a stable kernel API/ABI, whereas Linux only provides a stable userspace API/ABI (look at the ntfs-3g driver, implemented in userspace). You can bitch all you want; that’s how it is. Both approaches have their merits.
Just a reminder : the XP -> Vista transition broke that API, and drivers had to be rewritten…
I agree that binary blobs are to be avoided, 90% of all issues I encountered I traced to blobs.
Now to the radeon driver: it provides everything BFUs need right now. It works out of the box. What else do you want? Extreme performance? The devs were catching up on supporting 5 hardware generations up to today; I’d say what we have now is pretty good. A lot of the performance might be in power management, and some in the shader compiler, which is still WIP.
Bottom line is : I use it, and I’m happy with it.
Serafean
Broken after 5 years I might remind you.
TBH I don’t really care. I like getting in this guy’s face because he takes this ideological attitude about everything, even when it doesn’t make sense to.
The guy basically said once that I should learn GTK with C# bindings on Linux, when I am an ASP.NET developer. He also said this golden nugget of stupidity.
http://www.osnews.com/thread?491266
I know there are pros and cons about it, but whichever way you look at it, video card drivers have been a problem for a while on the desktop … in a perfect world we would have open source drivers for everything … but the thing is, we don’t.
I think you have me confused with someone else.
It was you.
My comment on C#, if I had made one, would probably have been more like this: “A nice enough language, such a pity about the proprietary class libraries associated with it. It seems designed to create a whole group of developers whose only skill is to write only for the Windows desktop. That will surely end in tears one day”.
http://thevarguy.com/business-smartphone-and-tablet-technology-solu…
“Gartner predicts Microsoft (MSFT) Windows will have only 15 percent market share in 2014.”
No, you did make one. I am not going to wade through my hundreds of posts, but it was exactly like I said, mainly because it was so idiotic.
.NET/Mono pretty much runs anywhere now, from the iPhone to the PlayStation Vita, so you can use C# anywhere. The Unity engine is built on top of Mono, and many people are making cross-platform games with it.
Also, I am an ASP.NET developer, so it doesn’t matter what the people using the services that I develop are running, because it is all standard web stuff.
Also, for a good programmer it doesn’t matter what language they learned; they are able to pick up another easily, since the principles are pretty much the same across most languages.
You really are clueless.
Edited 2013-07-05 12:07 UTC
Sure you can use C# anywhere. What you can’t do is use the associated proprietary class libraries such as Winforms anywhere. So if one writes software using C# in conjunction with Winforms, then it becomes Windows-only software.
If you want to write software which is inherently not Windows-only, you need to avoid class libraries which are Windows-only. So perhaps use GTK or Qt, and not Winforms. Users of .NET almost universally fail to do this, and hence they limit themselves to writing only for Windows, which, as Gartner points out, is expected to fall to 15% of the market in 2014.
You really are clueless.
Edited 2013-07-05 12:20 UTC
Which is trivial to do.
There is plenty of existing tooling that lets you do this; just because most don’t need to doesn’t mean you can’t.
Also, a lot of this software isn’t going to magically disappear after 2014.
You really are clueless.
Of course. Likewise, just because you can doesn’t mean that most do. Ergo, most .NET developers end up writing Windows-only software, and they do not ever gain the knowledge/bother to learn alternative class libraries which would allow their software to be cross-platform.
Of course not. However the market in which to sell the Windows-only software will shrink relative to software which is written in other languages/to other APIs.
Sorry, but most people don’t learn class libraries; they just look at the library docs when they need to.
All one needs in order to write software in any language is to understand how the language features work.
True. None of this is of any help once you have written your Windows-only C# application utilising Windows-only class libraries. You will only ever be able to sell that app to people running Windows.
To sell it to the much wider market you will need to re-write it from scratch, probably in some other language.
Most C# code is bespoke stuff that lives in large organisations much like Java did/does.
This is utter rubbish. If the logic of the application is split up correctly, it would be pretty easy to refactor or remove the Windows-only components. There is plenty of tooling available to make this trivial.
In any case, I develop ASP.NET web applications, and as I said before, I don’t care what they happen to be viewing them on, because it is just a webapp.
Edited 2013-07-05 13:29 UTC
But as I understand it, the webapp itself will have to be hosted on a Windows server. Isn’t your customer the person who runs the server, rather than the people who run the webapp?
Edited 2013-07-05 13:39 UTC
Yep.
Wasn’t it you who said, and I quote: “If something is changing that can cause defects.”
Correct me if I’m wrong, but won’t C# code using ASP.NET only run on Windows servers?
Linux is the market leader for web servers.
Correct. If you write a graphics card driver to a stable API as freedom software, then your driver can become part of the kernel source tree, and so it will be automatically re-compiled, packaged and shipped with every new kernel release. It therefore doesn’t require a stable ABI.
http://www.linuxdriverproject.org/mediawiki/index.php/Main_Page#Abo…
In fact, as a device maker, you don’t even have to write your own Linux driver. The Linux Driver Project developers will happily write one for you:
“We are a group of Linux kernel developers (over 400 strong) that develop and maintain Linux kernel drivers. We work with the manufacturers of the specific device to specify, develop, submit to the main kernel, and maintain the kernel drivers. We are willing and able to sign NDAs with companies if they wish to keep their specifications closed, as long as we are able to create a proper GPLv2 compliant Linux kernel driver as an end result. “
Edited 2013-07-05 09:40 UTC
The original quote was about software engineering generally, so don’t try to twist it. Your statement there is pretty idiotic, to say the least. I find it flabbergasting that anyone who claims to do development would hold such an opinion.
Back to kernel interfaces.
Also, some companies like to keep ownership of their code base to ensure quality, rather than relying on a 3rd party with only a dubious guarantee that it will be supported.
While there are a lot of supported drivers, I am willing to bet quite a few aren’t fully featured. There is no guarantee that their driver will be fully featured, or continue to be so, once they release ownership of the code base.
None of what you say addresses these concerns. The reasons for having a stable API/ABI are far more complicated (and some of them are human factors) than you are willing to admit or realise.
Your blinkered logic is simply incompatible with reality.
Edited 2013-07-05 09:53 UTC
If they don’t write the driver, but merely give specifications to the Linux Driver Project (under an NDA if they wish), then they don’t ever “release ownership of the code base”, because it isn’t their codebase. The code belongs to its authors, who in this case would be the Linux Driver Project. In the case of the Radeon graphics driver for Linux, some of the code is AMD/ATI code but some of it is Xorg code. Xorg maintain the codebase.
The Linux Driver Project would of course be as keen as mustard to write as “fully featured” code as is possible according to the specifications they receive. Manufacturers who want their devices to have a decent “fully featured” Linux driver, with no effort on their part, will of course be as keen as mustard to give the Linux driver developers all the specification details they need. As far as “continue to be so” goes, once the code is written it is written. As long as the device specifications don’t change it would be insanity to remove functionality from the already-written driver.
I concur about a stable API, that unarguably is a good thing. With a stable API, one has no need to keep amending the source code.
Freedom software has no need for a stable ABI, however. With freedom software, one continuously re-compiles it and re-releases it with every new version of any code it links with anyway.
Edited 2013-07-05 10:20 UTC
Which is a fucking stupid way of managing dependencies. If you think this is a good idea you are quite frankly a moron.
Every tiny change in a software project can cause a bug, and constantly changing little things is a good way to cause breakage. This is software engineering basics, and it is obvious you don’t understand it.
Excuse me? It is the way that all software development works. Take a look at a makefile.
You clearly don’t understand the difference between an API and an ABI. The Linux kernel typically maintains stable APIs but not ABIs. This means that the source code of drivers need not change, but binary entry points do, so the binaries of the drivers do need to change.
If you update the kernel, the source code of drivers typically does not change (because as I said the API is normally stable). Hence this is not a mechanism to introduce breakage. A driver module which has been compiled against the incorrect version of Linux kernel headers simply won’t load.
Edited 2013-07-05 12:29 UTC
Err no it isn’t.
Sensible platforms ship the binary it was built against along with the distributable.
If the library has changed there is no way without testing it to verify if its behaviour has stayed the same or not.
If something is changing that can cause defects. It doesn’t matter whether it is an ABI or an API.
If that were true, then you would get bugs which only occurred if a library happened to load in some memory locations but not others. After all, the addresses of all the target locations of all jumps and calls within a piece of code change depending on where in absolute memory the code was loaded, which, according to you, “can cause defects”.
Exactly so. People who owned machines for which the graphics card was perfectly functional and had decent performance, but which was out of production, faced the strong possibility that they would have no upgrade path to Vista and Win7.
Exactly so. Furthermore, Linux kernel maintainers have no ability to address the issue. One is entirely dependent on the whims of a proprietary supplier who in turn has little motivation to fix the issue.
My point exactly. How could anyone really logically conclude otherwise? One would have to have some kind of ideological devotion to closed source software in order to put up with binary blob drivers (when there was a perfectly usable open source alternative).
Why do you still bother with him? (somebody who even gloats in profile about his conversational deficiencies) Hardly anybody does any more…