Ian Murdock blogs about the importance of backward compatibility. “Yes, it’s hard, particularly in the Linux world, because there are thousands of developers building the components that make up the platform, and it just takes one to break compatibility and make our lives difficult. Even worse, the idea of keeping extraneous stuff around for the long term ‘just’ for the sake of compatibility is anathema to most engineers. Elegance of design is a much higher calling than the pedestrian task of making sure things don’t break. Why is backward compatibility important?”
The article mentions MacOS and MicrosoftOS and is on a Linux site.
I’m very very confused. This is just a flame fest and I know it.
Everyone here, I suspect, understands how important backward compatibility is, how Microsoft makes an awful lot of money from it, and why so much money is being thrown at OS virtualization solutions.
That said, with GNU/Linux the source code is available for just about everything, so the only things that suffer are the various binary blobs hanging around the place; anyone who uses Linux can list them, and wishes it weren’t the case.
Vista took 5 years to arrive as a new version; X.org releases every 6 months, Linux every 3 months, Gnome every 6 months, distributions like Ubuntu every 6 months, etc. The advantages of living without Legacy programs are that clear.
/*The article mentions MacOS and MicrosoftOS and is on a Linux site.
I’m very very confused. This is just a flame fest and I know it.*/
nah, i don’t think it’s a flame fest. he just wanted to mention two OSes that actually have something going in the desktop market, for a change. especially windows, because of the 96% desktop market share it has had for a long time.
Microsoft and its Windows OSes (all of them put together) never had 96% desktop market share worldwide, ever.
They only had OSes called Windows 95 and Windows 98; those numbers were the names of their OSes, not their market share.
What color is the sky in your world?
Basic physics proves the sky is black.
Deluded people think it is blue, but they are only seeing refractions through water particles in the atmosphere.
Interesting how my comment was modded down, but the original troll was voted up. Man, I love OSNews.
What can I say? A personal insult about the weather… people expect better from you…
“orignal troll”
http://images.google.com/images?sourceid=mozclient&scoring=d&ie=utf…
Funny …
“nah, i don’t think it’s a flame fest. he just wanted to mention two OSes that actually have something going in the desktop market, for a change. especially windows, because of the 96% desktop market share it has had for a long time.”
You know the answer to this one. It’s been answered everywhere.
Mind you I have been playing with this for ages.
http://marketshare.hitslink.com/report.aspx?qprid=2&qpmr=15&qpdt=1&…
“http://marketshare.hitslink.com/report.aspx?qprid=2&qpmr=15&…”
From the link you posted, how many System76 laptops are reported and accounted for in your data?
From the same link, how many Dell workstations that were sold with GNU/Linux as the default OS are accounted for?
From the same link, how many Acer laptops that shipped with GNU/Linux in the software package, but not installed, are accounted for?
The answer to all of the above is that they were not accounted for, which makes your statistics false and incomplete. Also, 40,000 websites that use Net Applications’ web tools do not constitute the internet, or any market share report worth reporting on.
I do not agree with the figures on the website, but they would not surprise me if they were accurate.
I would love to see real figures, but they unfortunately are hard to come by. I’d be happy for you to supply a better link.
It’s a fun little tool. I think what it shows that is interesting is that, if you look at *trends*, both Linux and Apple are making inroads into Microsoft’s monopoly, the desktop… slowly.
http://www.google.com/search?sourceid=mozclient&scoring=d&ie=utf-8&…
I think this link will provide you with the answers you need; it has the answers to the link you gave. I will agree to disagree with you and leave it at that.
No it doesn’t.
No it doesn’t
To what.
You cannot get market share figures from web site statistics. This has been debunked time and time again over here. There are many reasons for this (biased, self-selected sample; User Agent masquerading or ambiguity; persistent vs. ephemeral IPs; etc.), but mostly it is because it is *impossible* to get the number of individual visitors from web log analysis – the only thing you can get is the number of hits. Moreover, AOL users are notoriously overcounted when visiting pages with many images, registering as a different unique visitor for each.
Firms pretending to give you market share from web stats are basically the electronic equivalent of Snake Oil peddlers. Many in fact use the term “usage share” as a way to avoid getting in trouble with false advertisement laws.
Edit: here’s an example of a radically different picture…how do you reconcile this with the stats you provided? Well, you don’t, because both are by design incapable of giving actual market share figures…
http://www.w3counter.com/globalstats/
Edited 2007-01-15 20:03
As far as OS share goes, the two sites provide the same results. Both give the Microsoft OSes about 93% (85% + 5% + 2% + (<1%)).
So what your rant was about, I don’t know.
I suggest you look further than the end of your nose, i.e. check out the Linux stats on both sites.
My “rant”, as you so eloquently put it, is that *by their very nature* web stats are woefully inadequate in providing market share numbers.
Everytime someone uses web stats as an indicator of market share, God kills a kitten.
I suggest you look further than the end of your nose, i.e. check out the Linux stats on both sites.
I believe that this was an argument about MICROSOFT market share, I’ll deal with Linux later.
My comment related solely to your statement about Windows market share figures:
here’s an example of a radically different picture…
when comparing
http://www.w3counter.com/globalstats/
with
http://marketshare.hitslink.com/report.aspx?qprid=2&qpmr=15&qpdt=1&…
Look at the figures shown by both sites:
Windows XP: NetApplications: 85.30% w3counter: 85%
Windows 2k: NA: 5% w3counter: 5%
Windows 98: NA: 1.77% w3counter: 2%
Windows ME: NA: 0.89% w3counter: <1%
Total MS Market Share: NA: 92.96% w3c: 93%
So this ‘radically different’ picture that you are claiming is actually a 0.04% difference. Given that the w3counter stats are shown to 0 decimal places, anyone can claim that (to within the precision used) the two stats are Identical concerning Microsoft.
Back to linux. The minimum difference between the two figures is 1.13% (taking the lower bound of 1.5% for the Linux share from the w3counter site). Now an absolute error of just over 1%, when compared to the quoted Microsoft shares, is NOT a significant difference between the figures, in this case, relative errors are not important.
I don’t care about any hypothetical god killing anything (unless it can be proved ;D ). I’m just pointing out the ridiculousness of your statement
I did misread the original text and didn’t realize it was only talking about MS market share. My bad. That still doesn’t mean that web stats can be used to determine market share. They can’t, and shouldn’t.
I think it is a flame fest, and I think there are plenty of advertisers working for Microsoft out there who like to flame Linux. It doesn’t matter; it doesn’t fool the Linux community anyway. Anyone commenting on market share percentages (which are inaccurate, since nobody knows how many people out there use Linux as their desktop) is either another ignorant poster who believes the analysts and the media have correct statistics, or is in fact paid off by Microsoft in some way, whether through an organization that makes money because of Microsoft or through direct employment.
The article mentions MacOS and MicrosoftOS and is on a Linux site.
Issues like backward compatibility transcend the particular OS, so this is a very appropriate article.
The advantages of living without Legacy programs are that clear.
You clearly are in one of the two camps that the author discussed, which makes the article all the more helpful.
I don’t think the article is helpful. If I wanted an article discussing the advantages and disadvantages of backward compatibility on GNU/Linux, this was not it.
In reality, what it’s really discussing is one of the reasons why Linux has not taken over the world.
This does it *better*. It’s an awesome article.
http://catb.org/~esr/writings/world-domination/world-domination-201…
You’re right about the article you referenced being awesome. I’m not done with it yet, but I can already see evidence of good insight.
Vista took 5 years to arrive as a new version; X.org releases every 6 months, Linux every 3 months, Gnome every 6 months, distributions like Ubuntu every 6 months, etc. The advantages of living without Legacy programs are that clear.
The release cycle alone isn’t sufficient to tell which approach is superior. It simply shows that different organizations use different release models.
There are advantages to more frequent releases, such as earlier introduction of features. There are disadvantages, such as the amount of work it takes to engineer a release, and the amount of effort it takes a user to keep up with the release cycle.
A different approach is appropriate for Windows – people have invested their cash in software, and expect it to carry on working. In the Free (as in software) world, where upgrades are easily available at no cost the need for backwards compatibility is less.
Problems only arise with unmaintained programs that people would like to continue using, and with commercial software. I was irritated when my Corel Photopaint stopped working on new installs, as it had a couple of features Gimp (at that time) didn’t. I had to run it for a while on my SuSE 8.0 server until I didn’t need it any more.
edit: grammar corrected
On the plus side for Linux BSD etc. the freedom from implementing backwards compatibility gives a lot more coding freedom for improvements.
Edited 2007-01-15 12:11
On the plus side for Linux BSD etc. the freedom from implementing backwards compatibility gives a lot more coding freedom for improvements.
Actually FreeBSD maintains ABI compatibility through each major release. Additionally, you can run stuff from previous releases with no issues. I recently ran a full userland from 4.x on a 5.5 system with no hiccups, and then a 5.5 userland on a 6.2 system.
This also works well for stuff distributed in binary form like RAID management tools, old versions of Real Server and Oracle, etc.
The example on Ian’s blog post of altering the memory allocator to support an old broken program is totally crazy. Imagine if the Linux kernel developers did something like this: there’d be uproar!
Backward compatibility is less of an issue for free software for 2 reasons: (1) It’s free, so replacing the whole thing won’t cost you anything other than time, and (2) Free software tends to implement open standards, so if any given program stops working, you’ll probably be able to find a different program which implements the same standard.
–Robin
Backward compatibility is less of an issue for free software for 2 reasons: (1) It’s free, so replacing the whole thing won’t cost you anything other than time, and (2) Free software tends to implement open standards, so if any given program stops working, you’ll probably be able to find a different program which implements the same standard.
I agree, but I think you’ve missed the point: Backward compatibility is very important for all of those commercial, closed source Linux applications. It’s probably why there are so many commercial application and game developers working to get their proprietary applications/games working on Linux, and why the shelves of most computer stores are full of titles like “World Of Warcraft For Linux”…
> Backward compatibility is very important for all of those commercial, closed source Linux applications.
… and many of those open source applications. Many open source software projects need to supply binaries for their users. It is very nice if they don’t need a different binary for each version of each distribution.
I think many Linux users have at some point run into a program they downloaded not working. Sometimes compiling it yourself is easy; sometimes it is a major annoyance. The solution is backward compatibility.
No, that’s a different problem. That’s cross-distribution compatibility, and it has nothing to do with backward compatibility.
The source is *simply* there to make transitions between platforms easier. It’s available for users to compile, but it’s not mandatory: your distribution of choice probably covers it, or you should request that it does, or you should move to a distribution that does.
This thread is already flammable; there is no need to start a different one.
> No, that’s a different problem. That’s cross-distribution compatibility, and it has nothing to do with backward compatibility.
Why do multiple versions of a distribution need different binaries, then?
And most cross-distribution incompatibilities are caused by distribution X using version A of a library M while distribution Y uses version B of library M.
Waiting is no solution, because sooner or later distribution X or Y will upgrade library M, making them incompatible again.
I agree, but I think you’ve missed the point: Backward compatibility is very important for all of those commercial, closed source Linux applications. It’s probably why there are so many commercial application and game developers working to get their proprietary applications/games working on Linux, and why the shelves of most computer stores are full of titles like “World Of Warcraft For Linux”…
I agree, but I don’t think the success of commercial, closed source Linux apps should take precedence over the progress of Free Linux apps. I would prefer that the developers (not that I have a say) spend their time on more worthwhile things than putting in hacks like the ones described in the article just so that some closed source company’s binaries don’t break.
Indeed. Ian acts like Microsoft does this Right ™, but it doesn’t – why did it take 5 years to write Vista, and why isn’t Vista a big improvement over XP? Backwards compatibility. I’d say Vista is the best example of how costly backwards compatibility is, and why we don’t want it in Linux.
I have great respect for Microsoft’s engineers; they are amazingly good at keeping things compatible – but I won’t pay the costs in terms of huge memory usage, lower performance and less innovation.
You’re correct but missing one point. There are at least four types of backwards compatibility:
(1) GUI backwards compatibility (i.e. things look consistent with the way they used to, so you don’t have to relearn the GUI or command-line options)
(2) File format or protocol backwards compatibility
(3) Binary Level Backwards compatibility
(4) Source Level Backwards compatibility
Microsoft is pretty bad about (1). They enjoy moving things around almost every release, since that’s one way to get people to believe they’ve actually improved the current version. Most other software companies, and open source, try to avoid breaking (1) unless necessary. Keeping (1) whenever possible is *a good thing*, since it ensures that retraining costs are low.
Microsoft is also pretty bad with (2). Try loading the same file into different versions of Word and you’ll get different results. Keeping (2) whenever possible is *essential* since it allows interoperability between different 3rd party applications.
Microsoft is pretty good with (3) and open source is pretty bad with (3) (unless you count the LSB, which isn’t complete). This is very good for closed source vendors, but can lead to problems since it forces platform and compiler level constraints on the OS and application (e.g. byte ordering, padding for structures, stack order, register use, etc). These problems can result in your OS or applications trying to emulate the behaviour of archaic platforms long after it makes sense to do so. If the legacy really is that different, then the best thing to do is to handle it through emulation (e.g. WINE or MacOS9-on-MacOSX emulation) and stop making the problem worse. As they say, when you discover you’ve dug yourself into a hole, the first thing to do is to stop digging.
Finally, Microsoft is pretty good at (4) obviously, since it’s a special case of (3) and open source is very strong in this area. This is a *good thing* since rewriting a lot of code is time intensive, error prone, and distracting to the currently released version. (e.g. witness the long time it took GNOME 2.0 and KDE 4.0 to get released).
(1) GUI backwards compatibility (i.e. things look consistent with the way they used to, so you don’t have to relearn the GUI or command-line options)
(2) File format or protocol backwards compatibility
(3) Binary Level Backwards compatibility
(4) Source Level Backwards compatibility
Microsoft is pretty bad about (1). They enjoy moving things around almost every release, since that’s one way to get people to believe they’ve actually improved the current version. Most other software companies, and open source, try to avoid breaking (1) unless necessary. Keeping (1) whenever possible is *a good thing*, since it ensures that retraining costs are low.
I thought the newer the version of Windows, the less you have to relearn. You can switch UI themes.
Microsoft is also pretty bad with (2). Try loading the same file into different versions of Word and you’ll get different results. Keeping (2) whenever possible is *essential* since it allows interoperability between different 3rd party applications.
I tried many times and it worked OK. IMHO the only glitches are when someone uses rare or weird features.
Microsoft is pretty good with (3) and open source is pretty bad with (3) (unless you count the LSB, which isn’t complete). This is very good for closed source vendors, but can lead to problems since it forces platform and compiler level constraints on the OS and application (e.g. byte ordering, padding for structures, stack order, register use, etc). These problems can result in your OS or applications trying to emulate the behaviour of archaic platforms long after it makes sense to do so. If the legacy really is that different, then the best thing to do is to handle it through emulation (e.g. WINE or MacOS9-on-MacOSX emulation) and stop making the problem worse. As they say, when you discover you’ve dug yourself into a hole, the first thing to do is to stop digging.
byte ordering – can be solved by macros
padding – easily solved by a platform-independent data loader/writer and a few macros
stack order, register use – who cares? If you’re not going to use any stupid hacks it will work. As for registers – how would you recompile an x86 asm program, e.g., for MIPS?
Finally, Microsoft is pretty good at (4) obviously, since it’s a special case of (3) and open source is very strong in this area. This is a *good thing* since rewriting a lot of code is time intensive, error prone, and distracting to the currently released version. (e.g. witness the long time it took GNOME 2.0 and KDE 4.0 to get released).
Open source is IMO nowhere near good at (3). Try running some software from e.g. Red Hat 6 on Fedora 6, and try running software from Win95 on WinXP – the latter will work in 99% of cases. (4) doesn’t imply (3).
Isn’t the problem for (3) and (4) more the libraries? You can add functionality, but removing or changing things in the library interfaces breaks (3) and (4). As soon as you want to rework things, you can’t. KDE does keep their library interfaces the same (or adds to them) during an X.y series (e.g. an app for KDE 3.0 works on 3.5) and changes library interfaces e.g. for KDE 4. Apparently, MS can’t do that even for stuff built for Win 3.x, right? So they can’t fix bad interfaces. And they have hidden stuff (mentioned in the article) that they expect to be able to change – but can’t, as some apps even use those.
It limits their ability to clean up and enhance current libraries, but they can of course add new ones (isn’t this the same in Gnome and KDE during an X.y series?).
> > (3) Binary Level Backwards compatibility
> > (4) Source Level Backwards compatibility
> (4) doesn’t imply (3).
I never claimed it did. I claimed the opposite. I think you’ve mixed up (3) and (4).
> byte ordering – can be solved by macros
> padding – easily solved by platform independent data
> loader/writer and few macros
> stack order, register use – who cares ? If you’re not
> going to use any stupid hacks it will work. As for
> registers – how would you recompile x86 asm program
> e.g. for MIPS ?
I think you’ve mixed up (3) and (4), since macros can’t help you with binary compatibility on *preexisting* code, and stack ordering and register use *are* important since you can’t rewrite pre-existing programs and the behaviours of various members of the x86 line have evolved over time.
(1) GUI backwards compatibility (i.e. things look consistent with the way they used to, so you don’t have to relearn the GUI or command-line options)
(2) File format or protocol backwards compatibility
(3) Binary Level Backwards compatibility
(4) Source Level Backwards compatibility
Microsoft is pretty good with (3) and open source is pretty bad with (3) (unless you count the LSB which isn’t complete)
That’s plain wrong. FOSS is pretty good with (3) actually.
You’re getting it backwards. That’s some closed-source apps that have problems.
There’s just no FOSS app that has any backwards compatibility problem.
Case in point, I still use old gnome 1 programs like gcombust and cantus without any problem, I still use the very old mp3kult with the latest KDE 3, and I still can use Xv, and lots of very old X apps.
I still can play the very old Loki games and Neverwinter Nights on my uptodate Linux, so backwards compatibility is not the problem. Even Nero runs, but doesn’t work (uses deprecated interfaces).
You and the author are confusing broken apps with backwards compatibility problems.
The only app with backwards compatibility problem I have on my main Linux is the flash plugin, which uses some gcc 3.3 libstdc++. This is a C++ backwards compatibility problem, fixed by just putting an old gcc 3 lib. So there’s no problem.
This is very good for closed source vendors, but can lead to problems since it forces platform and compiler level constraints on the OS and application… These problems can result in your OS or applications trying to emulate the behaviour of archaic platforms long after it makes sense to do so
Agreed. That was a problem in gzip, for example, and they fixed it by removing the old buggy code, as the platforms don’t exist anymore. So FOSS does these workarounds too, when it’s deemed worthwhile.
>I agree, but I think you’ve missed the point: Backward compatibility is very important for all of those commercial, closed source Linux applications.
It’s not a problem for commercial application but it’s a problem for non-free software.
And here we are at the crucial question:
1. Do we want a free OS which respects users’ freedom and offers technically the best we can?
Then it’s good to have no large backward compatibility, because it’s just not a technically good situation if you have to keep and maintain all the old code. It’s also not bad to have no large backward compatibility, because it doesn’t hurt the free OS and the free apps. So it has no drawback with respect to goal 1.
2. Or do we want just another OS which makes it easy for people to use it (make money with their non-free software) but not contribute to the community and to the OS as a whole?
Then maybe we should have large backward compatibility. But we have to know what impact it will have. Maybe we will have more users and more non-free apps. But on the other hand we would lose our focus on having a complete OS which respects users’ freedom, and we would have to make many technical compromises. Would it still be the OS we liked? Linux came into existence without caring about backward compatibility or about non-free vendors. Will it still be “our” OS if we start caring more about them than about us?
I would go with option 1. Everyone can have his own opinion on this topic but it is important to see what it would mean to the system we know and like.
1. Do we want a free OS which respects users’ freedom and offers technically the best we can?
Then it’s good to have no large backward compatibility, because it’s just not a technically good situation if you have to keep and maintain all the old code. It’s also not bad to have no large backward compatibility, because it doesn’t hurt the free OS and the free apps. So it has no drawback with respect to goal 1.
I would hope we want a free OS that respects the users’ freedom to write either free applications or commercial applications, rather than forcing them (via “peer pressure” and technical reasons) to avoid Linux or to try to make a living selling software for free (note: not all application developers can make a killing selling “support” for free software, and not all software can be partially funded by companies with their own objectives).
I also hope we don’t want an OS plagued with backward compatibility (so that the standard kernel can still run the punch-card software your grandfather used as a kid). Instead I’d hope that a reasonable amount of backward compatibility can be provided in other ways, like “legacy” shared libraries (and perhaps something like User Mode Linux sandboxes), so that you get as much (optional) backwards compatibility as you want without the OS becoming a bloated mess for people who don’t need it.
Of course the other problem (cross-distribution compatibility) is another huge disadvantage, but that’s been discussed elsewhere. I personally think it’s a symptom of an even larger problem: the conditions necessary to make a project successful in its initial stages aren’t the same as the conditions necessary to make it successful in its later stages.
I would hope we want a free OS that respects the users’ freedom to write either free applications or commercial applications
That’s absolutely no problem, free software doesn’t exclude commercial software.
I agree, but I think you’ve missed the point: Backward compatibility is very important for all of those commercial, closed source Linux applications.
Commercial, closed-source Linux applications are much more likely to use statically-linked libraries (included in the application directory, most likely), so backward compatibility is not really an issue, unless the app is so old that it uses a.out instead of ELF (and even then, I believe recent kernels still include a.out compatibility).
Note: IIRC the version of GCC used can have an impact…can anyone confirm this?
In any case I honestly believe that has very little to do with Linux adoption and/or availability of commercial software for it – there are many other factors for this, the main one being customer inertia and OEM pre-installing Windows.
Edited 2007-01-15 19:45
“Yes, it’s hard, particularly in the Linux world, because there are thousands of developers building the components that make up the platform, and it just takes one to break compatibility and make our lives difficult”
Well, it’s not *so* bad. In Linux you can switch your kernel from 2.4 to 2.6 with minimal efforts: updated modutils, etc. Try installing a win 2k3 kernel in a XP box!
I’d say that modularization encourages people to make their “modules” work with many different versions of other modules.
>>Try installing a win 2k3 kernel in a XP box! <<
It’s called Windows XP x64! That’s right the 64bit kernel of WindowsXP is built off the W2k3 sourcebase and not the 32bit XP sourcebase.
>> It’s called Windows XP x64! That’s right the 64bit kernel of WindowsXP is built off the W2k3 sourcebase and not the 32bit XP sourcebase. <<
Wow someone’s missing a point here. He was talking about installing a kernel of one kerneltree in a system of another kerneltree. One can easily port the XP stuff to the W2k3 kernel and then call it windows XP-64 (and even that took quite a while).
Let’s try anew: try to install a Win 2k3 kernel on a windows vista system.
Why does it matter…?
I didn’t like this article, but anyway, it’s a huge drawback for Linux that the average Joe can’t just buy software on CD/DVD and hope it just works… or grab that 6-month-old CD and re-install the software he needs right now…
The world doesn’t run on Free Software alone (that would be awesome, but that’s not the reality). The average Joe usually doesn’t care about these issues; they just want the whole thing to work.
It’s not just Linux; Mac OS X suffers from something similar… you’re always forced (indirectly) to upgrade to another OS X version (which means $ and some technical knowledge). But the OS X situation is WAY better than Linux, as the single vendor can keep things somewhat more controlled in terms of compatibility issues. (By the way, the software installation method is amazing on this system: just drag and drop to copy and run. Preference files are standard and in a single location, etc.)
This kind of issue also happens with Windows, of course… especially with professional software. There are plenty of machines still running Windows 2000 (even on recent hardware) because of old software that still works, that has people who know how to use it and, especially, because there’s no reason to spend money on another piece of software when the one they own does what they need. But again, it’s yet another degree.
The way Linux distros are today, you’re bound directly to their support and their software repositories and, if you’re not somewhat experienced with Linux, unable to easily work with third-party software.
This also makes it even more difficult for small and even medium software makers to support Linux in their software (well, maybe that one distro maker and that specific version).
This new wave of web apps (local, intranet or net-dependent) is a huge help with this problem, but still… most professional programs and performance-dependent software can’t go this way.
Another problem is on the front of portable apps that can run from portable drives (USB drives, flash memory, etc.). We usually expect this software to run on several machines. It’s not difficult to do this with small apps (and even bigger ones like Firefox and Thunderbird) in a Windows environment, and it’s OK in Mac OS X too, as you can run software from partition images, but it’s a bit harder with Linux when you have several different versions and distros in your environment.
I’m not sure this problem can ever be solved in Linux… but there’s some hope. Even if it can’t be solved, the situation can be WAY better if programs agree on preferences and user databases/data. Even if someone is migrating from one system to another, and the programs that person has don’t “just work” on the new system, they won’t mind that much if all their pictures just work with the new system’s picture program; if all the tags in their photos are properly preserved; if their whole music collection still works, playable and properly organized; if all the 3D models that person built professionally work properly on the new system… you get the idea.
It doesn’t solve the problem, but helps a lot!
Elegance of design is a much higher calling than the pedestrian task of making sure things don’t break.
In *this* long-time software developer’s opinion, an “elegant design” will result in a piece of software that maintains a fairly high level of backwards compatibility to start with.
People who design things for pure design’s sake and not for actual use in the field are Ivory Tower types who are likely not writing software intended for the serious external use by others. Or at least I sincerely hope not…
Elegance of design is a much higher calling than the pedestrian task of making sure things don’t break.
In *this* long-time software developer’s opinion, an “elegant design” will result in a piece of software that maintains a fairly high level of backwards compatibility to start with.
I think the author of the first quote would agree with the second. I don’t think these are incompatible statements. Yes, an elegant design results in long term backwards compatibility, but eventually the opportunity arises to make better code at the risk of breaking stuff. I’d rather have the devs go ahead and break stuff in this case.
The better code is usually the result of moving from an inferior design to a better one. Arguably, if they’d picked the best design from the start this wouldn’t be so prevalent; but it doesn’t take a software designer to recognize that the odds of hitting the best design on your first iteration are very, very low.
Still, a careful design often allows the subsystems most likely to change to be redesigned later, such that breakage can be healed with compatibility code that doesn’t clutter up the new code.
But then you make a mistake and in order to fix it you break compatibility or live with the design you don’t prefer.
The blog is messed up because he’s obviously failed to read Joel in his entirety, where he points out that a new camp has taken over Microsoft and that the Raymond Chen camp is disappearing fast: compatibility is not a priority for Microsoft anymore. And if you look at the .NET libraries you’ll see quite a few things getting deprecated, even entire stacks such as Remoting!
Microsoft might choose to support some of these things for years and years to come, or it might not. It might “support” them while requiring code edits to keep them working (adding in little security “features” that break everyone’s code, which in the case of Remoting might help, as it might move people off Remoting).
Aero’s fancy mode, Glass, and the WPF engine that drives it drop out in a most unpleasant way when an application uses 3D devices in the non-WPF way. Maybe it was impossible to support this, but even then it should be possible to make a smoother transition. Or maybe Microsoft is giving these developers a harsh nudge to rewrite some of that code so that it works perfectly? Either way, this comes nowhere near what they did for SimCity!
The author also fails to point out how Microsoft can’t sell to its own customers but Apple has no problem managing that… Maybe Apple doesn’t need to play compatibility games because their customers like upgrades where Microsoft customers fear them?
Edited 2007-01-15 17:52
Backwards compatibility by definition renders innovation meaningless.
No
Backwards compatibility by definition renders innovation meaningless
You may want to look at the history of the IBM 360 and its successors. There has been significant innovation over the forty years of that product line, in both hardware and software, while maintaining a very large degree of backward compatibility.
I guess you would be surprised, then, that one of the most advanced and innovative OSes around – Solaris – has a Binary Application Guarantee – http://www.sun.com/software/solaris/guarantee.jsp
Dmitri
BeOS cares about binary compatibility too: R4 applications can still run on R6.1 (Zeta)…
Careful library handling accounts for most of it.
Ian Murdock is pointing at Windows’ problems to justify his own self-declared importance of backward compatibility on GNU/Linux.
The reality problem Ian Murdock has is that GNU/Linux *is* backward compatible, in source and in format.
Example: you can take any previous GIMP version and install it against an older or newer Linux kernel version; you just need to recompile it.
Example: you can open any GIMP image made with a previous version and save it, and the older version will still be able to open the new format.
I am sure he can find a GNU/Linux counter-example, but it’s going to be some obscure software that no one uses; and the interesting thing is that it’s his day job to make sure it works and to fix the problem.
Backward compatibility is only a problem when :
– You don’t have software source access.
– You don’t have access to format spec.
– You don’t have access to the best and latest.
It’s a total non-issue on GNU/Linux.
There is a bigger problem with LSB compliance than there is with backward compatibility.
He is supposed to know that ISVs have a bigger problem with where distributions put their libraries than with backward compatibility.
Backward compatibility is only a problem when :
– You don’t have software source access.
– You don’t have access to format spec.
– You don’t have access to the best and latest.
Yes, and for most software that someone still actually uses, you can find a binary in your distribution’s repositories (if you don’t know/care how to compile it yourself). Or you can ask someone to compile it for you. Of course, if the program is long dead and uses obsolete APIs, there is a problem. But then again, you probably never paid a penny for this software in the first place, and you can always take the source and pay someone to fix it.
Binary compatibility is only an issue for closed source software, which is not something free software community should sacrifice a lot for or go to great lengths for.
And bytecode languages such as Java or Mono (C#, VB, etc.) offer better backwards compatibility, so closed-source development for Linux should concentrate on these platforms.
Edited 2007-01-15 17:01
Binary compatibility is only an issue for closed source software, which is not something free software community should sacrifice a lot for or go to great lengths for.
This is not true. Binary compatibility is an issue for any site that runs multiple releases of the same distro. Running multiple releases of the same distro is not uncommon at commercial sites, as production systems tend to be spread over one or more releases and tend to be several releases behind.
Right now, the lack of binary compatibility across releases of Fedora Core is costing me time and effort, because I have to build a large application for FC6, FC5, FC4, and FC2 regularly and each one requires its own build, so I can’t build once and install the binaries everywhere.
The major reason why backward compatibility is important is that users often aren’t in a position to upgrade all of their systems at once, or on a single system, all of the pieces at once.
Right now, the lack of binary compatibility across releases of Fedora Core is costing me time and effort, because I have to build a large application for FC6, FC5, FC4, and FC2 regularly and each one requires its own build, so I can’t build once and install the binaries everywhere.
Is it really binary compatibility, though, or rather library compatibility? I know in the end it’s still a PITA, but it’s not exactly the same thing (in other words, could you solve this with statically-linked libraries?)
In this instance it’s both, although often it’s just libraries. Still, statically linking against libc (one culprit in this case) introduces problems as well as solving them.
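As a quick way to see what’s at stake, `ldd` lists the shared libraries a binary is bound to at run time; those bindings are exactly what can differ between distro releases. A minimal sketch, assuming a typical Linux system:

```shell
# list the shared libraries /bin/sh is bound to at run time;
# these bindings are what break when library versions change across releases
ldd /bin/sh

# a statically linked build (e.g. "gcc -static") would instead report
# "not a dynamic executable" -- no runtime library dependencies, but it
# then carries its own frozen copy of libc, bugs and all
```

So static linking trades one problem (moving library versions) for another (a private, never-updated libc).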
This is not true. Binary compatibility is an issue for any site that runs multiple releases of the same distro. Running multiple releases of the same distro is not uncommon in commercial sites, as production systems tend to be spread over one or more distros and tend to be several distros behind
That’s wrong. The problem you have is package management, not binary compatibility.
Right now, the lack of binary compatibility across releases of Fedora Core is costing me time and effort, because I have to build a large application for FC6, FC5, FC4, and FC2 regularly and each one requires its own build, so I can’t build once and install the binaries everywhere
That’s complete BS. Again, binary compatibility is not what’s holding you back; that’s package management. More specifically, you have a dependency management problem.
You could perfectly well compile the dependencies that differ, and install them on every one of these distros.
Then, one build of your large application would work across all of these distros without problem.
That’s the basic process that comes to mind for these kind of situations.
The major reason why backward compatibility is important is that users often aren’t in a position to upgrade all of their systems at once, or on a single system, all of the pieces at once
And that’s mainly a closed-source app problem. On free Linux distros, it’s more of a support problem, and a question of how much you want to pay for that support. Supporting even free distros has a cost, you know. That’s why you won’t see many people support an old FC2 for free. But every new FOSS app can run on an old FC2 without problem.
As for the backwards compatibility, every FOSS app that was on FC2 runs on FC6, either updated or replaced by a better app. That’s only to minimise cost. Like I said, I have old apps that run just fine on my bleeding-edge Linux OS, even closed-source ones. But this mainly means “static compile”, and we don’t like that on Linux.
binary compatibility is not what’s holding you back, that’s package management. More specifically, you have a dependency management problem.
You could perfectly well compile the dependencies that differ, and install them on every one of these distros.
This, of course, assumes that the dependencies can be backported to the earlier distros, which is often more work than recompiling our application would be.
The major reason why backward compatibility is important is that users often aren’t in a position to upgrade all of their systems at once, or on a single system, all of the pieces at once
And that’s mainly a closed-source app problem. On free Linux distros, that’s more of a support problem, and the amount you want to pay for this support.
The “support problem” is a problem. Thanks for making my point.
This, of course, assumes that the dependencies can be backported to the earlier distros, which is often more work than recompiling our application would be
Which is total BS again. You say you have to *often* recompile your application several times for each FC. Now you tell me that recompiling the dependencies that need recompiling *once* and for all is more work? What kind of nonsense is that?
And yes, these dependencies can be recompiled on the earlier distros, no problem at all. How do you think the legacy projects worked, by magic? There’s no FOSS software I can think of that can’t be, unless it’s tied to the kernel (sound, video). My Linux OSes (all of them) are recompiled from source, so I know the matter pretty well, and I fail to see which dependencies would need as much work as you’re saying.
The “support problem” is a problem. Thanks for making my point
Delusional now? Of course it’s a problem; it’s a problem with the user’s knowledge. Distros will provide you this knowledge for a price, one that you’re not paying with FC. Anyway, it has nothing to do with “backward compatibility”, which was just my point. I never denied there was a cause.
You say you have to *often* recompile your application several times for each FC. Now you tell me recompiling the dependencies that need recompiling *once* and for all is more work?
Which part of “backport” did you fail to understand?
If it were a matter of recompilation only, it wouldn’t be an issue. It’s an issue when moving the new stuff back to the old distro requires backporting rather than simple recompilation.
My Linux OSes (all of them) are recompiled from source, so I know the matter pretty well, and I fail to see which dependencies would need as much work as you’re saying.
It is obvious that you are looking at this from the perspective of an individual who has full control over the resources they utilize, rather than from the view of an organization which has to deal with real world constraints.
Your failure to see a problem makes that problem no less real for those of us who deal with it.
Binary compatibility makes managing such situations less difficult as does backwards compatibility.
I have had this idea for a while, but in a day and age where file systems can function without the need to totally remove files (copy-on-write), why don’t the compilers just check for, or keep, older versions of the same libs/mods/.so’s in the same place?
Just thought I’d throw that out there, because every time I hear the debate about backwards compatibility, it seems that a lack of space is the main drawback.
The one flaw I can think of with an idea like that is when the old version of an application had a vulnerability that the new version fixed, but because of ‘backwards compatibility’ the user, even though they could upgrade, doesn’t, and remains vulnerable.
in a day and age where file systems can function without the need to totally remove files (copy-on-write), why don’t the compilers just check for, or keep, older versions of the same libs/mods/.so’s in the same place?
That’s because they don’t need to.
This backwards compatibility “problem” in Linux is complete BS. There is just no compatibility problem.
The problem is the amount of support you can pay for.
Compilers have nothing to do with placing your libs; they compile, that’s all. The majority of tarballs have makefiles that install everything, and no, they won’t remove your old libs at all. What removes libs is your package manager.
That’s for maintenance issues. The package manager is supposed to do the work for you, in case you don’t know what you’re doing, but it has limited AI.
It can’t read your mind to know you need to keep a library.
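In fact, the shared-library soname scheme already makes this coexistence possible: several major versions of one library can sit in the same directory, and each binary keeps loading the version it was linked against. A sketch of the usual layout in a scratch directory (libfoo is a made-up name for illustration):

```shell
# simulate the usual soname layout in a scratch directory
dir=$(mktemp -d)
cd "$dir"

# two incompatible major versions of the same (hypothetical) library
touch libfoo.so.1.0.5 libfoo.so.2.3.1

# each soname symlink points at its own real file, so old binaries
# linked against libfoo.so.1 keep working next to new ones using .2
ln -s libfoo.so.1.0.5 libfoo.so.1
ln -s libfoo.so.2.3.1 libfoo.so.2

# the unversioned link is only used at build time, and tracks the latest
ln -s libfoo.so.2 libfoo.so

ls -l libfoo.so*
```

The piece that sometimes breaks the scheme is exactly what the comment above says: a package manager deciding the old `.so.1` is no longer needed and removing it.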
Just thought I’d through that out there, because every time I hear the debate about backwards compatibility it seems to be that a lack of space is the main drawback
No, it’s a problem of people who are not knowledgeable enough, and of package managers that can’t read people’s minds.
Also, it’s the amount of support you can pay for. Most of these stupid “backwards compatibility” articles compare closed-source commercial OS with free Linux ones.
When you have 90% of the market you have everything to lose and nothing to gain if you drop backwards compatibility. People have thousands of dollars invested in Windows apps.
When you’re at 2% and you still keep compatibility just so gimp will work right away…you’re going to be stuck at 2% for a very long time. Again, people have thousands of dollars in windows apps and they won’t switch unless linux and its apps are really much better than windows apps.
That’s the real spot.
Documented formats allow easy writing of migration tools, or fallback methods for reading data/settings/whatever.
Instead of weird binary formats, plain text files could allow migrating with one or two sed or awk lines.
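For example, if a new release renames a settings key, migrating a plain-text config really is a one-liner (the file and key names here are invented for illustration):

```shell
# old releases wrote "colour_scheme", the new one reads "theme";
# migrating a plain-text config is a one-line sed job
printf 'colour_scheme=dark\nfont_size=12\n' > app.conf   # sample old config
sed -i 's/^colour_scheme=/theme=/' app.conf
cat app.conf
```

The same rename in an undocumented binary format would need a purpose-built converter.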
Backward compatibility is priceless to everyone who uses a computer for serious work. For hobbyists, who are all about “exciting new features”, it is probably not important. In fact they probably don’t like it, because it slows down the “exciting new features”.
Lack of backward compatibility is what drove me from Windows, long ago. What I need is good, old, boring, predictable, uninteresting UNIX, with all those standards and drafts, like SVID, XPG4, SUS, etc.
DG
Binary compatibility only matters in situations where you want changes in the OS, but don’t want to leave that many users behind.
As others have cited, this is the case for Windows, which possesses extensive backwards compatibility.
Meanwhile, backwards compatibility is usually the least concern for spare-time FOSS developers, as they are more willing to move forward with development, even if it means throwing out anything that may hold them back.
Furthermore:
Windows = what brings in the most cash
FOSS = what brings in the most code
If you are switching from Windows to Mac or Linux, you have already given up backward compatibility with all your Windows apps. So those groups are more likely to accept further drops in compatibility.
As Cloudy and trembovetski mentioned, maintaining binary backwards compatibility for other OSes like Solaris, which is OSS, doesn’t hamper their ability to innovate. But it does require some serious engineering effort. Just look at the amount of discussion around including ksh93 in OpenSolaris/Solaris 11, because ksh93 isn’t completely (though mostly) compatible with AT&T’s original ksh88.
All the effort is basically shifting the burden from the user to the developers, which some people seem to take for granted, while I know some others appreciate.
In fact, I had a harsh reminder recently of the kind of breakage that can go on in Linux when trying to install the Cisco VPN client software, which is distributed purely in source form, so you need to compile the module on your system… God, so many small things have changed since the 2.6.x kernel interfaces that Cisco was using: function arguments, structure member names, etc. All of which I certainly do not know how to fix. And even if it compiled, who knows if the semantics have changed for other things…
Just because the source is available doesn’t mean anything in an unstable API world if you don’t have someone who has full time responsibility of maintaining your needed code.
For example:
http://www.hipac.org/
nf-HiPAC v0.9.1 does not compile with linux kernel versions greater or equal to 2.6.14 because in 2.6.14 the number of parameters of the netlink_kernel_create() function was changed. Please apply this patch if you want to use nf-HiPAC v0.9.1 with kernel versions greater or equal to 2.6.14.
What if it was left like that for a while? What if it just stagnated? It’s still OSS — who would’ve picked up the slack if so? And that kind of stagnation usually leads to a death spiral: the more broken it is, the less likely someone else wants to pick it up.
… and the nf-HiPAC update was a year ago… who knows if things will break again with 2.6.20…
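One common way out-of-tree projects like this cope is to gate the compatibility fix on the kernel version at build time. A hedged sketch of that idea in shell (the patch file name is hypothetical):

```shell
# decide at build time whether the 2.6.14 compatibility patch is needed;
# version_ge succeeds when $1 >= $2 in version-sort order
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kver=$(uname -r | cut -d- -f1)       # e.g. "2.6.20"
if version_ge "$kver" "2.6.14"; then
    echo "kernel $kver: netlink_kernel_create() signature changed, patching"
    # patch -p1 < nf-hipac-2.6.14-compat.patch   # hypothetical patch name
else
    echo "kernel $kver: building unpatched sources"
fi
```

The equivalent in-tree C idiom is an `#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,14)` guard around the changed call, but either way someone has to keep maintaining the gate as the API moves.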