So imagine my surprise when I dug around in a quarter-century-old archive to find a .zip file containing something that purported to be the original executable of Labyrinth. Surely such an ancient piece of code – written for Windows 3.1 – wouldn’t launch?
Well, after a bit of fiddling with the Windows compatibility settings, I was shocked – and extremely pleased – to see that, yes, it most certainly did.
It shouldn’t be surprising that a piece of good Windows code from 30 years ago still runs on Windows 10 today, and yet, it always is.
Can somebody try this on Linux with an executable of that era?
Probably won't work, because glibc symbol versioning was introduced later.
https://maskray.me/blog/2020-11-26-all-about-symbol-versioning
Unless it is statically linked.
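You can check what a given binary actually demands with standard tools (a quick sketch; ./old-app is a placeholder name):
$ objdump -T ./old-app | grep GLIBC_
The GLIBC_x.y version tags show the oldest glibc able to satisfy the binary; a statically linked binary (or one predating symbol versioning) shows none.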
In 1994, a binary would be an a.out binary. a.out still supported shared libraries, but used a completely different shared library format from ELF. Those binaries are actually surprisingly compatible, because running one requires a complete stack of period shared libraries anyway, so the only compatibility edge is the kernel.
My personal gripe is that compatibility gets much worse moving slightly newer. I’d really like to be able to run Linux Netscape 4, but it’s not really possible because of incompatibilities in libstdc++.
It’s still compatible unless you’ve compiled the kernel without a.out support (although perhaps some distros do these days?).
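For anyone who wants to try, a.out support is a binfmt that may or may not be built (a sketch; the config file location assumes a distro that installs it under /boot):
$ grep BINFMT_AOUT /boot/config-$(uname -r)   # CONFIG_BINFMT_AOUT=y or =m if supported
$ sudo modprobe binfmt_aout                   # needed when built as a module
$ file ./oldbinary                            # a.out reports e.g. "Linux/i386 demand-paged executable"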
You need the same complete stack of libs with a newer binary too; e.g. netscape 4 will work if you supply it with the libstdc++ version it wants.
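In sketch form (the soname here is from memory and may be off, and note the dispute below over whether this is actually sufficient):
$ mkdir -p ~/netscape4/lib
$ cp libstdc++-libc6.1-1.so.2 ~/netscape4/lib/   # the old library, sourced from a period distro
$ LD_LIBRARY_PATH=~/netscape4/lib ~/netscape4/netscape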
The difference here between linux and windows is that microsoft includes copies of all the old versions of the libs by default, whereas linux distros generally don’t. On linux, running old binaries is a niche thing that not many people do (most things have been recompiled) so there’s no point bloating up the installs of everyone else with sets of old libraries they won’t ever use.
The other difference is that there was never a 16-bit version of linux, although source code for unix systems that predates windows will often compile and run on a modern linux.
bert64,
If somebody can show how to make Netscape 4 run on a modern system, I’d love to see it. Obviously I tried the obvious parts like replacing libstdc++ with no luck. I think these libraries end up with cross-library dependencies so that, like a.out, it requires an entirely custom stack. What makes a.out work well though is it has a different ld-linux.so loader, whereas with ELF AFAIK that’s not really possible. Whenever I talk about this on forums, people always tell me that it should work, but nobody ever demonstrates that it does.
Agree with the general point that Linux distributions don’t include compatibility libraries that Windows would include. Netscape 4 aside, there’s no reason a distro can’t include QT1/2 or GTK1 to support applications using common older libraries.
I would really like a statically linked distro, any recommendations? I don't care about the minuscule disk and memory usage increase, and upgrading binaries is dead easy with a package manager, much easier than resolving dependencies.
Having said that, dynamic linking is not the only problem on Linux. No one gives much thought to cross-version compatibility of the data files used by libraries. It also doesn't help that a lot of shared functionality has moved into services, again with a poor track record of interface stability. The only part that remains stable is the kernel (thanks Linus), but even that is being circumvented by growing a meta-kernel above it (systemd).
I found this one: https://sta.li/ . It doesn't use the GNU C Library but musl.
Looks like it cannot be done (the resulting binaries cannot be distributed) for licensing reasons (LGPL library and a non-GPL application). LGPL was specifically written to allow dynamic linking only.
Dynamic linking is also needed for run-time extensions. Glibc is doing that for locales etc.
The closest solution is then to use dynamic libraries but bundle them with the application, pointing the loader at them via LD_LIBRARY_PATH or LD_PRELOAD. This is incidentally how most commercial Linux applications are shipped.
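A typical launcher script for such a bundled app looks something like this (an illustrative sketch, not any particular vendor's):
#!/bin/sh
# resolve the install directory, prepend the bundled libs, then start the real binary
APPDIR="$(dirname "$(readlink -f "$0")")"
export LD_LIBRARY_PATH="$APPDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$APPDIR/bin/app" "$@"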
Yes, binary compatibility is taken seriously in the Linux kernel. The explicit goal is “30 years compatibility”:
https://sudonull.com/post/146239-Linus-Torvalds-on-binary-compatibility
On the other hand:
– Drivers drop out of kernel trees, so the old hardware these binaries need might no longer be supported (and since there is no stable kernel ABI, driver backward compat is out of the question)
– As mentioned, any dynamic libs would not be there
– And if this is a desktop app, it would require an X11 server to be running (not always the case with Wayland and co.), and probably only Xt or other primitive toolkits would be supported
Windows tries to keep the kernel API and user API compatible across decades, and the kernel ABI within a given major version. But of course that comes at a significant cost (16+GB minimum install).
sukru,
Quoting Linus from your link:
It comes as no surprise that Linus is so adamant about compatibility, after all unix compatibility was the main selling point for Linux in unix circles. Arguably he could have improved on unix in many ways, but ironically it would have detracted from linux’s popularity in unix circles who just wanted their existing software to run on a free unix clone.
I wish Linus would take a similar stance on kernel-space compatibility as that would help alleviate the incompatibilities between manufacturer builds and mainline linux kernels. Alas, I don’t think that’s going to happen. I’ve pretty much given up hope of mainline linux simply working on all commodity ARM hardware.
Solaris was pretty polished and took compatibility seriously. There are some nice mechanisms in there.
Alfman,
I found that kernel compat issues are actually worse than just cross-vendor. Even with the same sources, a minor config change can render modules incompatible (the size or layout of the internal structures change).
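One visible face of this (a sketch; the module name is a placeholder):
$ modinfo -F vermagic ./mymod.ko
5.13.7-200.fc34.x86_64 SMP mod_unload
The kernel refuses a module whose vermagic disagrees with the running kernel, and even a matching string doesn't save you if a config option changed the layout of a structure the module touches.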
That being said, I can empathize with Linus on binary modules. They would have hindered actual progress in kernel space.
I remember when Microsoft introduced PatchGuard in the 64-bit kernel; security companies (Symantec?) ran a full-page ad accusing them of locking competition out: https://searchsecurity.techtarget.com/tip/Microsoft-PatchGuard-Locking-down-the-kernel-or-locking-out-security . (I personally have strong opinions on these "so called" security programs, but let's not delve into that.)
If Linux had accommodated some sort of driver ABI, it would be subject to the whims of the large entities of the day. I can easily imagine nvidia saying: "no, you cannot change the power management system, since it will break our binary blobs".
Yet, I think it might be time to think about a better strategy. I don't have any ideas though. As you said, the ARM situation especially is really bad.
sukru,
That's why I'm in favor of having compatibility within major versions, allowing it to break periodically for the sake of progress. Sort of like "LTS" releases. This is a compromise that would drastically improve our ability to use our own kernels while not permanently setting anything in stone.
Well, we’re already accustomed to ABIs breaking nvidia drivers anyways, so I don’t see a reason it would necessarily be any worse than today. As a cuda developer I don’t use Nouveau drivers, but I think for most people they’re good enough to shield them from nvidia’s policies.
Yeah, this is the reason I favor a compromise. IMHO it would be a reasonable approach, but then people are so fortified in their positions that I don’t think anything will change, haha.
Nope, that’s why Windows rules the world unlike Linux which rules only the minds of the most devoted fans.
A. You are plain wrong and/or knowingly spreading misinformation. Statically linked binaries will work. I have ~18+ y/o binaries that work. (Just took a binary from an old RH9 VM and ran it on my Fedora, see below (1).)
B. Define irony: the misinformation you just posted traveled through at least 5-10 Linux-based devices and servers.
(1) RH9 binaries:
$ ssh -Y XXXXX
gilboa@XXXX’s password:
[gilboa@XXXX gilboa]$ cd ~/work/Test/ && cat test.c
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
    printf("Hello world!\n");
}
[gilboa@XXXX gilboa]$ gcc ./test.c
[gilboa@XXXX gilboa]$ uname -a
Linux XXXX 2.4.20-8smp #1 SMP Thu Mar 13 17:45:54 EST 2003 i686 i686 i386 GNU/Linux
[gilboa@XXXX gilboa]$ exit
gilboa ~ scp gilboa@XXXX:work/Test/a.out .
gilboa@XXX’s password:
a.out 100% 11KB 253.9KB/s 00:00
gilboa ~ ./a.out
Hello world!
gilboa ~ uname -a
Linux YYYY 5.13.7-200.fc34.x86_64 #1 SMP Sat Jul 31 14:10:16 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
99.99% of people out there have never seen or used console applications.
Secondly, this topic is about dynamically linked GUI applications and your argument is “statically linked console applications still work”? Wow.
Maybe you need to go back to school and revise basic concepts in logic and debate, because you've failed at them spectacularly.
“Plain wrong or spreading misinformation”. Wow. A person who completely subverts the discussion with inapplicable arguments.
Given the fact that you were caught red-handed spreading misinformation once (and I'm being polite), I would advise against trying to send me (or anyone else) to coding school.
You have no idea who I am, what my credentials are or what I do for a living.
Moreover, I managed to run **dynamically linked** binaries taken from the same RH9 installation. It simply requires far more effort (when it comes to recreating the same library environment).
Besides, if you had taken the time to actually write production code on both Linux and Windows (preferably cross-platform code), as opposed to trying to teach others (…), you would have known that the Linux user mode ABI is just as stable as the Windows user mode ABI.
That said, I would concede that the Linux kernel mode ABI is far more fluid, and I'm forced to spend far more time maintaining my Linux kernel code than my Windows kernel code.
… And caught red-handed spreading misinformation for the second time.
gilboa $ uname -a
Linux gilboa-home-dev 5.13.7-200.fc34.x86_64 #1 SMP Sat Jul 31 14:10:16 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
gilboa $ LD_LIBRARY_PATH=../lib/:../../lib/ ./xlogo
Warning: Cannot convert string "xlogo32" to type Pixmap
^C
gilboa $ file ./xlogo
./xlogo: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.2.5, stripped
gilboa $ ls -lh ./xlogo
-rwxr-xr-x. 1 gilboa gilboa 13K Feb 27 2003 ./xlogo
… Had enough?
Steps to reproduce:
1. Download the following RPMs:
expat-1.95.5-2.i386.rpm
XFree86-libs-4.3.0-2.i386.rpm
XFree86-tools-4.3.0-2.i386.rpm
2. Unpack them into a directory.
3. Locate the bin directory.
cd $ROOT_PATH/usr/X11R6/bin
4. Execute the binary with modified library path:
LD_LIBRARY_PATH=../lib/:../../lib/ ./xlogo
xlogo?
LMAO!!
You’re so freaking funny. Out of thousands of full featured GUI application you chose the one which is linked to glibc and libX11. You absolutely need to write more. Your argumentation skills are null and void.
So.
I proved that I can run a statically linked console application; you moved the goal post, claiming it proves nothing because the discussion was about dynamically linked GUI applications.
I proved that I can (easily) run dynamically linked GUI applications, you changed the goal post once again claiming that my GUI application was not GUI enough…
I can waste ~20 minutes and get, say, a GTK application running, but then you'll move the goal post once again, claiming you were talking about 3D OpenGL applications; rinse and repeat.
If I post a screenshot + the steps required to get vim-X11 running, would you admit defeat, or will you continue behaving like a 5 y/o on a rant?
OK. To finish the public humiliation:
1. Download and unpack (into $UNPACK_ROOT) the following RPMs:
abiword-1.0.4-2.i386.rpm glib-1.2.10-10.i386.rpm libtool-libs13-1.3.5-7.i386.rpm XFree86-libs-4.3.0-2.i386.rpm
aspell-0.33.7.1-21.i386.rpm gtk+-1.2.10-25.i386.rpm libtool-libs-1.4.3-5.i386.rpm XFree86-tools-4.3.0-2.i386.rpm
expat-1.95.5-2.i386.rpm libstdc++-3.2.2-5.i386.rpm pspell-0.12.2-16.i386.rpm
2. Locate and edit Abiword script under $UNPACK_ROOT/usr/bin
3. Make the following changes to the script:
$ diff AbiWord.new AbiWord
8c8
< ABISUITE_HOME=$UNPACK_ROOT/usr/share/AbiSuite
12c12,13
< export LD_LIBRARY_PATH=$UNPACK_ROOT/usr/lib/:$UNPACK_ROOT/usr/X11R6/lib/
> ABISUITE_LIBEXEC=$UNPACK_ROOT/usr/lib/AbiSuite/bin
4. Execute Abiword (from close to 19 years ago….)
$ UNPACK_ROOT=~/Download/RedHat/open/ ./AbiWord.new
Gdk-WARNING **: locale not supported by Xlib, locale set to C
Let me guess… you didn’t mean Fully Functional Word Processor ™ from 18 years ago, you meant (enter pathetic excuse number 4…)
When your grandma or the average iOS/Android user can do what you're now doing, let's get back to your "great compatibility" crap, which involves very niche skills in downloading packages from the net and satisfying dependencies manually. And then, you're choosing convenient apps for which dependencies can actually be satisfied without breaking the rest of the system; sorry to break it to you, that's not always possible.
You're humiliating yourself, only you don't see it. Windows doesn't require any of this crap to run apps from 26 years ago. Yes, not all apps from that age will run (especially games which were hardcoded against things which no longer exist), but lots of them will.
Also check this for a beautiful excursion into Linux "compatibility":
https://blogg.forteller.net/2016/humble-test/
/Thread and GTFO.
Artem S. Tashkinov,
It is tricky to manage dependencies manually, but the truth is that the vast majority of linux users never have to do it manually anyway, because they use the centralized repo to automate it. Beyond that, distros have made progress with things like snap packages that include self-contained dependencies, much like windows applications do.
There are pros and cons to everything, including Windows' predominant setup.exe approach. Arguably the maintained repo approach linux distros have used for years is easier & safer for most users. And not for nothing, but it's one of the features that windows has taken from linux to simplify installation on windows. I'm not saying that to gloat, I regularly concede that Linux has its share of problems, but it seems to me that a lot of your arguments are pushing one-sided linux hate over more balanced objectivity.
@Alfman
Sorry but Artem is correct on ease of use versus skillsets. It is simply a fact that you can take an out-of-the-box experience, one-click patch to a support level, and install a Windows app almost completely back to Windows whatever. I forget the cut-off date and what the criteria are, but we're into Win32s/Windows 1.0 territory.
Microsoft bakes all the skillsets into one click. They do your thinking for you, so you do not need technical skills or arcane knowledge, or, and this is one of Linux's worst issues, to deal with obsessives with social problems.
Until today I did not know what a Snap package is. Why in god's name can't someone just describe it as a one-click install? What is it with NIH and wanting to invent some jargon you have to be a mind reader to understand? All I have heard from the Linux crowd about Snap packages so far is a wall of politics and technocratic in-crowd babble so some inadequate edgelord playing office politics can flaunt their superiority. And Linux aficionados wonder why the uptake of Linux is so poor. This problem impacts Linux at every single point of contact. It's like dipping your toe in acid.
I also saw what you did with the argument on Windows store. You reframed away from the issue. Windows Store is not copying Linux. They’re copying Apple and not even copying Apple really. They just want your money. All Apple did is actually copy themselves. They copied iTunes but made it for applications. iTunes wasn’t the first music store either.
Some of the biggest mistakes Microsoft made were to sack their quality assurance, technical writing and driver bug response teams. This all happened under Ballmer and some of his careerist nod-alongs biding their time. Microsoft now really only care about big business and corporate lock-in and the cloud and clinging on to their monopoly with backhander deals with intelligence agencies. Ditto Apple, we suspect. But I digress.
Over the years industries have grown up around Microsoft to provide documentation and training for non-technical users. In contrast the Linux crowd make life impossible for people who have skillsets and are adept at technology to make a smooth transfer. This has gone on for long enough now that I feel it is deliberate. It's an artificial filter put in place to trip people up and keep the gurus on top. It's a filter. It's hazing.
All of the above is pretty much why I am not using Linux at the moment. It's not because I'm stupid or technically illiterate or cannot be bothered to make an effort. It's just once you get over the very very narrow consumerised experience you fall off a cliff and get smashed on the rocks before drowning. I know you lot have to make money or have status or whatever, but can the Linux crowd please do this without beating up the end user? Stop making excuses. Either you want to be a viable replacement and have mass market appeal or you don't.
HollyB,
I agree that managing dependencies manually is a usability disaster. However that needs to be contextualized, as the vast majority of linux users no longer do that: they can rely on their distro's repositories or snap & flatpak, which are, without exaggeration, arguably as easy as Android and iOS.
I suspect a lot of the criticism stems from historical reputation and the fact that many software publishers, particularly commercial ones, don't support linux. For better or worse, traditionally linux is the go-to OS for FOSS software and windows is the go-to OS for commercial software. But this has much more to do with marketing than anything intrinsic to linux. I commend companies like valve/steam for doing an extraordinary job making linux more accessible for the masses. It's admittedly still niche, but it really does make it easier.
Canonical built it. If you install Ubuntu it comes with the OS out of the box. Otherwise you can likely install it from your distro’s repositories.
https://snapcraft.io/
https://flathub.org/
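For example (package names here are the typical ones; your distro may differ):
$ sudo dnf install snapd   # or: sudo apt install snapd
$ sudo snap install vlc    # the snap carries its own dependencies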
Regarding branding, I hear what you're saying, but operating systems risk trademark infringement. Apple used to say it owned the word "app" even. Google using "play store" isn't really my favorite, but that's what they called it. Android forks like LineageOS usually preinstall "f-droid", which is extremely handy, but most outsiders will have never heard of it either. Snap could have been called something like "Ubuntu Packages" or "Ubuntu Store", etc, but that makes it sound like it's only for Ubuntu. "Linux Store" may have trademark issues. All in all I appreciate where you're coming from, but I don't ask questions as to why things are named the way they are 🙂
IMHO most users keep using what they know and it takes a particularly unpleasant event to convince them to change their ways. Personally I couldn’t stand vendors like microsoft and apple treating my computer like they own it. I’d probably still be on windows if microsoft stopped going on power trips with their users, but I saw this wasn’t going to get better, so I decided to jump ship for linux even though it wasn’t all smooth sailing.
Not to make too big a deal of it, but clearly both microsoft and apple copied elements from elsewhere, including linux. Heck, a lot of these stores are reminiscent of Sun's Java Web Start one-click install, which came out in 2001. Amusingly Apple didn't have the first app store on the iphone; that honor goes to Cydia. The point being, everyone copies a little bit from everyone else, and it's not necessarily a bad thing if it improves along the way.
Yes, of course.
I don't deny it, some linux circles have an attitude problem, and uncomfortably some of that came from Torvalds himself, although he has since apologized for his hostility. I'd like to see the culture become more rounded and inclusive. There's a lot of work to be done, but this "better than you" attitude is something I'm trying to do my part to change in the community. While I may not have much pull, I think there are plenty of us who are like minded.
I'm not the evangelical type, it's not my goal to sell anyone on linux who doesn't want it and I don't recommend linux for everyone. If you have pre-existing software that needs to work, or want things to work the same way your current OS works, then why change? That said, if you are interested in gaining more linux experience and you've got some time and a spare machine, I think running it for two weeks straight gives more experience than years of side-tinkering does.
A note about hardware: While linux works on a surprising amount of hardware, unfortunately the vast majority of manufacturers will not provide official support, so whether it works on random hardware you've got lying about is hit or miss. I would recommend trying a live CD or thumb drive just to see if linux will run. Those of us who use linux regularly try to explicitly buy compatible hardware from the start (but even that can be tough since manufacturers can switch chips without notice). Ideally you would buy hardware from a linux vendor and let them take care of components, but I appreciate that this isn't a financially viable option for many consumers, so I would just give it a try on whatever hardware you've got around.
Lastly static linking is extremely bad and must be avoided if possible:
http://web.archive.org/web/20210611023053/https://akkadia.org/drepper/no_static_linking.html
“Date Registered: 2005-07-06”
Wow, you must be at the very least 30 years of age.
Artem S. Tashkinov,
Linux userspace ABI is actually more stable than you give it credit for. It’s the userspace dependencies that change. Also, I’ve seen plenty of windows programs break over the years after upgrading windows. Backwards compatibility has been pretty good, but certainly not perfect. Many drivers have broken over the years, but even if we exclude those I’ve also seen normal apps break too. It happens.
That’s one opinion, but there are varying considerations depending on circumstances. Sometimes the problems associated with dependency hell are worse than the cons of static binaries. In fact in order to mitigate this very problem some package managers are using shared libraries in a non-shared way, which works but often results in shared libraries having less of a resource advantage. Static binaries can give you dead code elimination. In theory, a build system could contextually optimize static binaries even further than shared libraries (although in practice I’m not aware of any build systems that do this).
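To make the dead code elimination point concrete, a minimal sketch (standard GCC/binutils flags; app.c is a placeholder):
$ gcc -static -O2 -ffunction-sections -fdata-sections app.c -Wl,--gc-sections -o app
Each function gets its own section, and --gc-sections lets the linker drop every section nothing references, so the static binary only pays for the library code it actually uses.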
So all in all I'm not against shared libraries, but I do think the badness of static binaries is somewhat exaggerated.
@Alfman
Tell me more. Specifically: try to compile an app on your shiny Ubuntu 20.10 and then run it on Ubuntu 16.10. Who the f* cares about ABIs when APIs break? Well, you can perfectly well compile an app on Windows 10 and run it on Windows 7 or even XP.
This topic is not about f*ing statically linked apps. Period. Statically linked apps are in essence almost complete operating systems.
Again, this article talks about dynamically linked GUI applications from ages ago which continue to work, a concept which is alien to Linux, whose adepts insist on open sourcing everything or it must not exist. Then you ask them who the f* will maintain, compile and package all these open source applications, and they shut the f* up. Tons of GTK1/2 and KDE1/2/3 applications cannot be run on modern Linuxes by most normal people.
Linux on the desktop is a failure as an OS, period. It's never worked, and it'll never work unless someone steps up and does something about this zoo of interdependencies and an almost complete disregard for forward and backward compatibility.
Artem S. Tashkinov,
You’re right that new dependencies may not work with older operating systems, but we need to be clear that this is the exact reverse of what everyone else is talking about. Running yesterday’s applications/APIs on today’s operating systems is not the same thing as running modern applications/APIs on yesterday’s operating systems.
Whether it's windows or linux, it has practically nothing to do with the OS you use to build, but rather the build environment you are building with. Even in windows land, the newest versions of visual studio output code that will not run on older operating systems by default! This is by design on microsoft's part, because they don't want new software to run on older versions of windows, even if there's no technical reason it shouldn't. If you're working with older software it's probably best not to upgrade visual studio, for compatibility's sake. As a developer, I've been hit by visual studio's project upgrade wizard taking a code base designed back in the 90s and generating new binaries that are incompatible with older versions of windows. They do this in order to convince your users to upgrade windows. If you've got multiple versions of visual studio, you should give it a try some time.
No need to curse so much, I’m just saying it depends on the circumstances. Static linking can make things easier if you want a binary that’s more portable between distros without worrying about local dependencies. Snap packages might be a good alternative though.
Even in terms of GUI applications, it's not that big a deal to have a static application use the X protocol, especially if you're using mostly trivial wrappers. Take a look at FLTK if you're interested. On the other hand if you're using a huge bloated library then of course that will favor shared libraries. People can do what works for them.
To be fair though, this problem isn't unique to linux. There have been many times when I've been prompted about a missing DLL when running windows programs. Unless your application bundles its own dependencies, DLL hell has been a long-standing problem on windows too. In fact I'd even say DLL hell was downright atrocious during its ActiveX years. Thank god that fell out of favor.
@Alfman
DLL hell was solved with Windows Vista. For the past 15 years of using Windows I’ve had zero issues with DLLs. Missing DLLs? Yes, in very rare cases some applications don’t bundle MS Visual C runtimes they were compiled with. It can be solved in under 5 minutes without breaking your system.
This conversation is a disaster. Windows does not have a compatibility issue, with very rare exceptions; Linux doesn't give a f* about compatibility, with very rare exceptions. Two quite opposite approaches if you ask me, yet Linux fans in the topic try to prove me wrong by posting some utter nonsense. Of course @gilboa is on a whole different level of BS'ing. He started with console apps, now he's moved on to GUI "apps" linked to glibc and X11.
Again the article is about a full featured graphical application written for win32s which continues to work in Windows 11 26 years later or more.
Artem S. Tashkinov
That’s just not true. Well, forwards compatibility is generally pretty good, but you’re the one who specifically said “Well, you can perfectly compile an app in Windows 10 and run it in Windows 7 or even XP.”.
https://stackoverflow.com/questions/35664861/how-to-target-windows-xp-in-microsoft-visual-studio-c
Several years ago we encountered the exact same issue when a customer reported the incompatibility to us. Unbeknownst to us, new versions of visual studio broke backwards compatibility such that windows actually says “xx.exe is not a valid win32 application”…great job microsoft /sarcasm.
You can add flags to build binaries that are compatible with windows xp, but then it still introduces new missing-DLL issues, pretty much as reported in the link. Of course these are all solvable; as developers we have to take care to deploy the missing dependencies so that users don't see that. But these new VS-induced incompatibilities did not happen on our own machines, and we only found out when customers started complaining about them on their machines.
The latest versions of visual studio include compilers that cannot generate backwards compatible code at all; per microsoft themselves, you have to actually install and configure VS to use older MS toolsets to build backwards compatible code…
https://docs.microsoft.com/en-us/cpp/porting/features-deprecated-in-visual-studio?view=msvc-160
https://docs.microsoft.com/en-us/cpp/build/configuring-programs-for-windows-xp?view=msvc-160
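For reference, the XP story in those docs boils down to two knobs: build with the XP-capable toolset, and stamp the PE header with a subsystem version old loaders accept. A sketch from a developer command prompt (project and object names are placeholders):
> msbuild app.sln /p:PlatformToolset=v141_xp
> link /SUBSYSTEM:CONSOLE,5.01 app.obj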
On the one hand, the platforms are officially unsupported, so maybe it's to be expected. But on the other hand it seems like MS is using its control over development tools to make backwards compatibility harder than necessary.
Believe it or not I’m actually ok with those who suggest we need to use old compilers/tools to target old windows versions, but I’m not ok with a double standard for linux. Criticizing linux for something while glossing over the same thing on windows is very biased.
@Alfman
My portability layer covered not just every version of Windows but compilers too as well as ancient compilers by Watcom and Borland, and game consoles and Linux, and 32 and 64 bit. It was quite a small file in the scheme of things. It’s been years since I used it so my memory is very foggy but it was pretty much all covered by flags so you didn’t have to think about it. All I had to do was set flags for what I wanted to compile to and the rest happened.
I also tested code by compiling on multiple compilers. It's a no-brainer to have multiple OS installs or VM installs archived. I also tested graphics APIs across multiple vendors. Yes, I even had an OpenGL compatibility layer which dealt with vendor quirks and included a file for OpenGL on 3Dfx Voodoo.
The reason why I caught problems is because all of this was in place and I tested stuff. VS 2005 didn’t catch me out because I already knew about it.
Myself I feel a portability layer should be the first thing anyone includes and then use it. It really should be included with every compiler or SDK. As you note Microsoft have no interest in this. It’s also one reason why they set out to destroy Borland. If they can dominate the toolchain they can maintain their monopoly.
HollyB,
On the one hand it's not a big deal to support modest portability standards. This is why linux on ARM goes so smoothly (in terms of software support; kernel support is a different matter). But on the other hand not everyone has the resources to actually acquire and test all the hardware, operating systems, compiler licenses, etc. So for better or worse it may be the end users testing a combination for the first time.
With proprietary projects, usually nobody else is touching the code. So it may not be worth testing other compilers. We do the best we can, haha.
On linux the GCC compiler has been very robust for me; some other build tools have given me far more grief though, there are just so damn many of them, with a plethora of versions in the wild. Take cmake, which exists to modernize make & autogen, but unfortunately too many times I come across software that I cannot build right away because it depends on a newer version of cmake than what my distro ships. Ugh!
I understand. Ideally everyone would do that. The new version of VS compiled the software ok and you wouldn’t know anything was wrong unless you ran the binary on unsupported versions of windows. You would catch it if you have unsupported operating systems in rotation, but this particular shop did not. And personally I don’t have licenses to legally run older versions of windows in a VM.
I really liked Borland’s tools back in the day, far cleaner than microsoft’s. Alas, microsoft was the one with the monopoly 🙁
@Alfman
I don’t code now so it’s someone else’s problem. As it happens I did a clear out a few weeks ago and just hit delete on the remainder of my code libraries. I did have a look and my portability layer was in there. Ooh the work that took and I hit delete on it. It was out of date by a few OS and compiler versions but even so. It hurt but sometimes you have to let go of stuff.
One thing I forgot to mention is you could stack Windows SDKs and compile against different SDKs. I can't remember if I did or didn't, but I think I had some SDK checks in my portability layer too. I have no idea how Linux works here, but you could put versioning stuff in a portability layer too, I guess.
I’d have to test this to be sure but Windows SDK versions pretty much tracked compiler and OS versions. You could easily use an older compiler with a newer SDK within reason and an older SDK too, again, within reason.
I tested against GCC but never used it. There wasn't much between the compilers at the time, but for maths stuff Intel had a percentage lead even after you dodged their dirty tricks, although this did evaporate as time went on. VS6 was pretty good, as was Borland. Hardly anyone would know what the executable was compiled in, but if you're pushing it, a couple of percentage points means the difference between a stable frame rate and drop-outs.
One tool I did like was Intel’s OpenGL analyzer. You could attach it to a binary and when you ran it do a dump of all the OpenGL calls it made among other things. It’s perfectly possible to write your own. I didn’t learn a lot from using it but it was a lazy way of having a nose or gathering timings.
@Alfman
Of course to successfully compile an application in Windows 10 so that it could run in Windows XP you need to use special flags and maybe a certain version of VS compiler but it’s perfectly possible.
It's perfectly impossible on Linux, because once you link against a newer version of glibc/libstdc++ you cannot run your application on an older Linux distro, period. While e.g. libstdc++ can be overridden using LD_PRELOAD, you cannot do that for glibc for some reason (I don't know why).
TLDR: Microsoft does care about forward and backward compatibility; for Linux it's something no one gives a damn about. Again, for Linux fans, "It's either open source, or GTFO". And like I've mentioned in this thread, when you ask them who's going to maintain, compile and package all the open source applications, they shut up momentarily.
When you ask them how to run this GTK1/2, KDE1/2/3, SDL1 application on Fedora 34, they also shut up.
The situation with compatibility on Linux is truly horrible. Not only do APIs/ABIs break often, Linux over its 30-year history has replaced many core components, rendering older applications broken.
Now we're transitioning from X11 to Wayland, which again renders a ton of applications useless.
Artem S. Tashkinov,
Sure I agree that using the old compiler and libraries will work. The point was about using the latest and greatest, which won’t necessarily produce backwards compatible software.
Of course it’s possible, you just have to install and use all the necessary dependencies by hand. I’ll concede it’s tricky and a normal user wouldn’t figure it out, but saying it’s impossible is wrong.
It's not sufficient, because GNU's libc also depends on ld-linux, which is another dependency that you have to consider. But once you know that, it's completely possible to run ancient binaries, including graphical X programs. Here's a working example of running xfig from centos 3 on debian 10.
https://ibb.co/m0qVRRg
Without any help, getting the right versions of the necessary dependencies can be a real pain, since the repos don't help here. But once you have the necessary dependencies, running it is quite easy (not obvious, but easy):
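Something along these lines (a sketch; paths are illustrative, with the period libraries and loader unpacked under ./oldroot):
$ ./oldroot/lib/ld-linux.so.2 --library-path ./oldroot/lib:./oldroot/usr/X11R6/lib ./oldroot/usr/X11R6/bin/xfig
Invoking the period ld-linux.so directly sidesteps both the host's loader and its glibc.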
Linux kernel userspace stability is actually very good, but like I’ve been saying it all comes down to dependency management. As you no doubt realize, doing it by hand is a pain.
Well, IMHO some of your assertions are flawed, but regardless I think your frustration is genuine and I can sympathize. You're right about there being a FOSS bias in the linux community. I personally have that bias as well. It's my negative experiences with proprietary commercial vendors that led me to prefer FOSS. Having experienced the full potential of things like open NAS/routers/etc, I never want to go back to proprietary solutions. Alas, many consumers either don't care or find the path too difficult because their vendors don't support open source, which can be a problem.
Distros have been doing a pretty good job of it IMHO.
As gilboa and I have been demonstrating, it's a matter of getting the dependencies right, but I also think you're missing a hugely important point: even in the commercial domain you aren't going to get support for ancient unsupported software. They're going to tell you to use the latest version, so why do you expect FOSS to be any different? It sounds like you're trying to apply two standards.
I concede once again that manual dependency resolution is very tedious, but in fairness this does not impact the vast majority of linux users, who are using package managers to update their systems. I can't remember the last time I needed to run ancient binaries, graphical or otherwise. Keep in mind that running old binaries on windows is more important because windows has lacked package management for most of its history, and it's more likely for users to be running out-of-date binaries. It's not inconceivable that you might need to run ancient binaries on linux, but it's more niche (although let me know if you think there's a widespread use case).
Maybe someone should build a tool that can install ancient dependencies automatically, but it would be a very niche tool that would need to index terabytes of ancient unsupported libraries. Does windows have such a tool? It could be useful there as well for those missing DLL errors.
Gripes aside, that’s really not true at all. A lot of users are already running X applications on wayland.
This is not compatibility, period. You’re mixing stuff up.
Compatibility is when something works out of the box for the average Joe. Running binaries via ./ld-linux (wow, I didn't even know it was possible in my 20+ years of using Linux) is a dirty hack which requires a ton of expertise.
Well, I love knetstats. No one wants to port it to KDE5. I don't want to go through the circles of hell to be able to run it on my Fedora 34. This tiny simple application has zero alternatives. Lastly, even if you hack it to run under Fedora 34, I don't want to load tens of megabytes of libraries and wait a few seconds.
Such an issue doesn’t exist for stupid Win32 applications a majority of which continue to run to this day.
Artem S. Tashkinov,
Most people would disagree with your view that a missing DLL or library is the same thing as an outright incompatibility. I think you’re just posturing and that you wouldn’t actually tell a fellow windows user their software is incompatible over a missing DLL.
I didn’t expect you nor an average joe to figure that out, I’m glad you learned something though, that’s why it helps to ask 🙂
On windows I used to try to back up & copy "Program Files" directories, but over the years this has become increasingly futile due to system dependencies and things being hidden in the registry. Consequently I've learned that trying to migrate naked binaries from one system to another is the wrong approach. You could go hunting down the dependencies manually, trusting the shady sites that provide them, but I'd advise people to just use the proper installation method as intended (be it a windows installer or linux repos).
As I’m not seeing my response due to the forum’s weird handling of long threads, I’m copying my response (with some minor fixes):
“Artem S. Tashkinov
xlogo?
LMAO!!
….”
OK. To finish the public humiliation.
The following can be automated using a simple bash script (a sketch of such a script follows the steps below).
1. Download and unpack (into $UNPACK_ROOT) the following RH9 RPMs from RedHat:
(http://archive.download.redhat.com/pub/redhat/linux/9/en/os/i386/RedHat/RPMS/)
abiword-1.0.4-2.i386.rpm glib-1.2.10-10.i386.rpm libtool-libs13-1.3.5-7.i386.rpm XFree86-libs-4.3.0-2.i386.rpm
aspell-0.33.7.1-21.i386.rpm gtk+-1.2.10-25.i386.rpm libtool-libs-1.4.3-5.i386.rpm XFree86-tools-4.3.0-2.i386.rpm
expat-1.95.5-2.i386.rpm libstdc++-3.2.2-5.i386.rpm pspell-0.12.2-16.i386.rpm
2. Locate and edit Abiword script under $UNPACK_ROOT/usr/bin
3. Make the following changes to the script:
$ diff AbiWord.new AbiWord
8c8
< ABISUITE_HOME=$UNPACK_ROOT/usr/share/AbiSuite
12c12,13
< export LD_LIBRARY_PATH=$UNPACK_ROOT/usr/lib/:$UNPACK_ROOT/usr/X11R6/lib/
> ABISUITE_LIBEXEC=$UNPACK_ROOT/usr/lib/AbiSuite/bin
4. Execute Abiword (from close to 19 years ago….)
$ UNPACK_ROOT=~/Download/RedHat/open/ ./AbiWord.new
Gdk-WARNING **: locale not supported by Xlib, locale set to C
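As promised, a sketch of that script (the mirror path is the one given above; rpm2cpio/cpio are the standard way to unpack an RPM without installing it):
#!/bin/sh
MIRROR=http://archive.download.redhat.com/pub/redhat/linux/9/en/os/i386/RedHat/RPMS
UNPACK_ROOT=$HOME/Download/RedHat/open
mkdir -p "$UNPACK_ROOT" && cd "$UNPACK_ROOT"
for rpm in abiword-1.0.4-2 aspell-0.33.7.1-21 expat-1.95.5-2 \
           glib-1.2.10-10 gtk+-1.2.10-25 libstdc++-3.2.2-5 \
           libtool-libs-1.4.3-5 libtool-libs13-1.3.5-7 pspell-0.12.2-16 \
           XFree86-libs-4.3.0-2 XFree86-tools-4.3.0-2; do
    wget -nc "$MIRROR/$rpm.i386.rpm"
    rpm2cpio "$rpm.i386.rpm" | cpio -idm   # unpack under $UNPACK_ROOT
done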
Let me guess… you didn’t mean Fully Functional Word Processor ™ from 18 years ago, you meant (enter pathetic excuse number 4…)
Download this or that, run cryptic console commands, etc. etc. 99.9% of users will never be able to do that, but who cares? It won't always work anyway, since LD_PRELOAD fails in certain cases, but funny gilboa, spreading crap, doesn't care, because he meticulously chooses the right applications to show off "great" compatibility in Linux.
Windows: download an application from Windows 95 (granted it has a 32-bit installer, which wasn't always the case), install it and work with it in Windows 11 64 in 2021.
Office 2000 from 2000 works just fine in Windows 10 in 2020. Without f*ing hacks, crap and cryptic console commands. YouTube has a ton of clips showing it.
https://www.youtube.com/watch?v=rvJPALX0qfo
Continue to embarrass yourself. You’re perfect at it.
So, as expected:
First it was Linux cannot run old applications.
Then it was Linux cannot run old dynamically linked GUI applications.
Then it was Linux cannot run old *REAL* dynamically linked GUI applications.
Now the fully featured word processor I chose is not fully featured enough and/or sufficiently random enough for you.
Rinse and repeat.
On a private note, had it been me, I’d simply concede that my original comment was plain stupid (God knows I made my share of mistakes in the past), give some off-hand apology for the exchange, blame everything on having a bad hair day, and leave with some semblance of self respect.
… But that’s me.
Have a good day.
It's not that you cannot technically do it; you can always use an emulator for that. It's about installing an old binary package without making any further changes to your system, or just running a damn old executable as-is. Like when you find an old setup on a CD-R or a zip file in an old email. On Windows it pretty much works out of the box (minus some old runtime dependencies you can easily find and install).
Don't make it look like you didn't understand the initial request.
gilboa implies/believes that when you can hack something into running, compatibility surely exists.
Basically he completely misunderstands and misuses the word.
Compatibility means something works out of the box with no hacks/workarounds applied.
I can imagine you will next show how to run apps from chroot/docker, i.e. by installing a whole f*ing OS just to satisfy dependencies. You’re so good at BS’ing.
Windows: install and run (not always the case, but the absolute majority of Win32 apps written in the past 25 years will run on Windows 11 64 without any tricks or modifications).
Artem S. Tashkinov,
Well obviously you always need to have the dependencies in order for software to run, it’s only a question of how those dependencies are installed.
1. They may be bundled with the OS.
2. They may be installed automatically by a package manager.
3. They may be bundled with the software itself.
4. Failing all of these, the user ends up having to fix it manually.
I believe this pretty much sums up the dependency solutions for all operating systems, no? There are pros and cons to every approach.
You say this, but do you have any credible data sources or is it just anecdotal evidence? Anecdotally I’ve seen lots of breakages (due to aforementioned DLL/OCX hell) but it could be because I installed so much darned stuff 🙂
Something else that we should mention is that most linux software is open source that distros can recompile as necessary, which significantly decreases the need to run ancient binaries at all. Granted not everyone has embraced open source, but it makes a big difference compared to windows where typical users are almost 100% dependent on proprietary software.
@Alfman
Being awkward – the Windows SDK contained source code for the runtimes. You couldn't compile and distribute it, as it landed in the system directory, but you could meddle and build your own if you wanted to.
A few things are broken in later Windows, as Microsoft dumped them or licenses expired, such as with some Intel codecs. I never used them myself as I had a severe dose of my own NIH, plus I preferred open standards. Although I did have a copy of Intel's JPEG library, which was fast compared to others; they stopped distributing it, so if you didn't have a copy to use you were stuffed. The JPEG consortium reference library was pretty much good enough. The usual gotchas were video or audio libraries. I think a few games had problems when they open sourced, especially with audio libraries, as the companies had usually gone bust by then.
It seems it’s usually third party libraries which are the problem.
Artem,
You are absolutely right except in one case:
Older InstallShield packages that used 16-bit installer binaries will not run on 64-bit Windows. That case is vanishingly rare even now. Even MSI packages from 2000 will run, and even the InstallShield ones can be hacked to run.
I can assure you that I have done this on Windows 10 x64 many times. I can also tell you that getting anything done with running older GUI applications under Linux has been an exercise in futility.
mbpark,
Your phrasing makes it sound like there's only one exception, however I've seen a lot of applications stop working, even those that are 32-bit. Of course the vanilla win32 APIs are pretty safe, and if we limit our scope to those then I'd be more inclined to agree. But for better or worse some applications use weird dependencies and even DRM that breaks over time. Some of the VB applications that seemed innocent enough at the time used OCX libraries that would not register & load successfully on new operating systems. I used to work at an industrial automation company and none of that old windows software works on modern versions of windows because of the OCX issue. I wrote a game that used an intel codec for video playback, and the last operating system it worked on was win98; the subsequent versions of windows all reported a codec licensing error. Another application I still maintain for customers is an old MFC application, and one part of it froze consistently on windows 10, whereas it had worked for decades before. We debugged it and the freezing was occurring inside MFC code. Our solution was to reimplement the code to avoid the faulty path so that it would work on windows 10. Some applications depended on IE & ActiveX technology under the hood, which hasn't aged as well as the rest of win32. Some of our customer management tools stopped working with windows 8. Since the software was supported we got an update and went on with our lives, but the point remains that the binary broke.
Obviously many applications still do work fine, but we should refrain from suggesting forward compatibility for 32bit windows software is flawless because it isn’t.
@mbpark
Thank you, really!
One of the vanishingly few rational voices here.
@Alfman
I've never claimed it's flawless, but Linux on the other hand has basically no implied or guaranteed backward or forward compatibility whatsoever. This is quite a difference vs Windows, where pretty much all applications released in the last 15 years will run flawlessly in Windows 11 64.
Artem S. Tashkinov,
That’s very misleading though because linux has very good backwards compatibility if you install all the dependencies for it to work. The main difference stems from how dependencies are distributed. If you distribute binaries without dependencies that is a problem, but that’s not how most linux distros work in general.
IMHO you need to stop saying it’s a compatibility problem and acknowledge it’s a dependency problem. Let’s flip the tables for a moment: would you be ok if I said a program was “incompatible” with windows XYZ just because it outputs a missing DLL error on windows XYZ? No of course you wouldn’t, you would rightfully say it’s not a compatibility problem, but a dependency problem.
What happened to “I’ve never claimed it’s flawless”? Haha.
We should agree it’s not flawless and leave it at that.
FFS, that’s not how compatibility works. If you need to install this or that or that and use LD_PRELOAD – that is no longer compatibility – these are hacks.
Artem S. Tashkinov,
So then by the same logic you’d have to say windows is incompatible with software if it’s missing a DLL. Got it. That’s not what I think but if that’s what you want to go with then so be it.
@Alfman
Missing DLLs? Haven’t seen this error for the past 20 years even once. Keep on lying and making things up.
DLL hell was more or less completely resolved with Windows Vista in 2007, i.e. 14 years ago. Linux has nothing similar in comparison, by a long shot. Hacking shared libraries and running cryptic commands is not compatibility.
Artem S. Tashkinov,
Just because you claim to not have seen it doesn’t mean others haven’t. Sheesh.
Yes managing dependencies manually sucks, but you are using a completely double standard and I’m just pointing it out.
If your argument is that missing dependencies == incompatibility, then that should apply to windows as well.
Your argument is that DLL errors never happen in windows (I disagree with you, but whatever). Then you should acknowledge that typical linux users are using a repository or snap package and generally never see missing libraries either.
I would agree the methods of software distribution are different and there are pros and cons for each, but your double standards show some glaring biases. I concur with javiercero1: you’ve been moving the goalposts every which way to try and cover your hypocrisy.
Android says hi.
Android compatibility so far has been great but not stellar.
Lots of applications written before Android 5.0 are barely functional or plain don’t work in Android 11.
I installed Android 2 era APKs on an Android 6 device without problems, so thumbs up for now.
That’s what happens when you depend on external cloud services that get shut off. Binaries will (mostly) work, however if you depend on an external web service, you’re usually out of luck because few companies keep them around for that long. Google’s about to shut off authentication for 2.3.7 and below if they have not already! Someone turned on one of the first Android phones a few years ago for one of the tech sites and almost nothing that required Google services worked.
mbpark,
A bit tangential to the thread, but relevant to your comment:
I got a new Holdpeak AiLink multimeter and paid a few bucks more for the bluetooth option, figuring it might come in handy some day. It lets you connect the multimeter and stream the output to your phone, cool. But I didn't do my research and thought nothing of how terrible the app would be. First of all, it's a huge 50MB apk, wtf? Whatever… but I was shocked that it wouldn't simply run locally: it needs to be online and you need to log in to a damn account in order to use your multimeter's bluetooth option. Furthermore the app demands access to your GPS location, otherwise it won't start. The multimeter that I paid good money for forces me to infest my phone with spyware. WTF! This is completely unacceptable and I won't be using the bluetooth option at all.
I’m a fan of smart devices, at least in theory, but the big problem is that I feel there’s a lot of bait and switch going on. The consumer buys a smart device thinking they’ll program it directly from their phones, but then you end up going through some BS middleman that can shut you down at any time. Like the revolv home automation devices that google officially bricked. I’ve been looking for remote controlled outlets, plenty exist on the market and they’re all advertised as “alexa compatible” or whatnot, but almost none of them give any lip service to whether you can disable alexa and access them directly. Smart tech doesn’t need to be this way, it’s not a good idea for privacy, control, or even reliability, but as a consumer the options for buying smart yet untethered devices are getting slimmer by the year.
The point being that the most popular operating system is based on Linux, aka. linux now rules the world.
@javiercero1
The point is
1) Android contains the Linux kernel which Google intends to replace with Zircon
2) Android may work with any other kernel as well
3) 99.99% of users have no idea what their Android phone actually runs
IOW (the) Linux (kernel) is irrelevant (for Android). There's only one place where Linux is truly important and irreplaceable: supercomputers and application servers (and even that's doubtful, as FreeBSD is in many ways preferable and Linux is only popular because it's garnered momentum). Everywhere else? Only fanboys care.
@Artem S. Tashkinov
You can move the goalposts all you want. The only thing that Windows rules is the desktop, for everything else Linux is the dominant player. So technically, Linux rules the world.
@javiercero1
I’m moving goalposts?
Well, we'll see soon enough how Google dumps the Linux kernel and suddenly there's no Linux being the most popular OS in the world. Lastly, Android is not Linux. Having the Linux kernel under the hood doesn't make Android Linux. There's Debian/HURD – notice how there's no Linux at all in its name.
Lastly Android doesn’t use almost anything from the GNU software stack.
You're in way over your head if you believe Google gives two shits about the Linux kernel. If anything they actively hate it, because it makes supporting devices in the long run near impossible, since the Linux kernel doesn't support any long-term ABI/API compatibility, which is why Zircon was created in the first place.
You can be an idealist and expect companies to release their devices drivers source code, only they haven’t done that and they will continue not to do it because drivers are often what gives them a competitive advantage.
I think this article is a bit misleading. Note this is a 32-bit binary, not a 16-bit Windows 3.1 binary. It is possible to design a 32 bit binary for Windows 3.1, but they work via a compatibility layer (Win32s) – the binary itself is effectively an NT binary, which is why it works today. A true 16 bit Windows 3.1 binary would need either a 32 bit version of Windows, or something like winevdm.
It's legitimate. My portability layer went all the way back to Win32s. Even with Windows 95 there is stuff retrofitted with patches. I could compile stuff for anything (including game consoles) or handle 32- or 64-bit architectures just by setting flags, and it also covered the quirks of different compilers and OSes. Sometimes you need to take care of things in an abstraction too, so you get the right flags set and warnings produced, but if you do this at the beginning portability almost happens by itself.
If you dig into the Windows SDK you’ll find some functions are wrappers for other functions. Memory is getting vague here but I think some Windows functions are posix wrappers.
Let me put it like this: if you do a fresh install of Windows 3.1, this program will not run on it.
In the real world, people scope support to service packs or patch version numbers.
In rare instances you can get libraries to add backwards compatibility to compilers at compile time. This happened when VS 2005 dropped support for a few kernel routines. This niggled me at the time, but my portability layer could manage it, and I have no problem shipping two or more binaries for separate platforms in an install package, or separate install packages.
At one point in time, one reason why game developers insisted on Windows 9x users installing IE5 wasn't that IE5 was needed, but that the install package included a collection of patches, such as an uprated networking stack, which were needed. It was just less bother to insist on IE5 being installed than to go down a list of patches checking which OS version the user was running and with what patches.
Short version? Nobody cares.
Nobody cares today.
Now imagine it’s 1994. You don’t have Internet, that arrives a couple years later. You go to run a program, and it fails saying it’s an invalid executable format. What do you do?
Back then, it was customary for software to distribute its own runtimes, because users didn’t really have great alternative options to find them. That creates two big problems: if you’re distributing on floppy disk, the size of your program just quadrupled; and you still need an installer that can run without a runtime in order to bootstrap everything. Remember, in 16 bit land, there were no Visual C++ runtimes – the only option was static linking.
I don’t think I ever saw a program target Windows 3.1 that used Win32s. Win32s was only used as a joke to demonstrate that simple 32 bit software that wasn’t designed for 3.1 could run. Even things like IE4/5 were native 16 bit programs, with 16 bit backports of the 32 bit common control libraries.
As for VS 2005, you’re referring to things like GetLongPathName, right? I really think somebody goofed with that release – the DLL is stamped as 4.0 but they didn’t read the documentation that said the API is available on 4.0 if you include NewApis.h. Because the API was marked as 4.0, the DLL still builds but doesn’t run on 4.0.
@malxau
I never used Win32s but I was pretty hot on portability. If I coded anything to run on Windows 3.1 for the lulz I wouldn't rule out using it. I just like portability, so if I can get something to run on a ten-versions-behind OS, even if only in theory, I'm not going to ignore that.
Computers were still a novelty back then. I think people would just go along with it if you had another dozen discs to install. It was also a different world back then. Mostly businesses, enthusiasts, and developers. Mail through the post, dial-up telephone, and so on were how it was back then. Didn't they dish out Win32s on magazine cover discs? Other than this I know what you mean.
I can’t remember the details but what you say sounds vaguely similar. That and Microsoft goofing 16 bit support.
I don’t code now so it’s all water under the bridge.
malxau,
Size wasn’t an issue for Win32s. For one, lots of software was being distributed on CD, because lots of people had CD drives. The Warcraft II Map Editor is a great example.
But, even distributing on floppies wasn’t a problem. Photoshop 3.0 was a Win32s program, and that was on 5 floppies.
Meanwhile, you only have to check Steam, for example, to read how good backwards compatibility on Windows really is.
Anyway, this is another reason open source is so important: a binary might not work on a current version of windows/linux, but in most cases a simple recompile might be all you need.