TinyClock is a tiny true 5-arch universal Mac OS X single-binary GUI application.
A single universal binary that can be executed natively on every hardware platform Mac OS X was made for (32/64-bit, PowerPC/x86/Apple Silicon).
Just fun.
It’s not five architectures, it’s three: x86 (32-bit), PPC (32-bit), and ARM64. The author comments that this is enough to run on any system, but it’s not the whole set.
x86_64 is needed to run on the latest Intel versions of macOS, because those no longer handle 32-bit binaries.
Yes.
PPC 32-bit
PPC 64-bit
x86 32-bit
x86 64-bit
ARM 64-bit
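No single toolchain today can emit all five of those slices, so presumably each one has to be built with an era-appropriate compiler and then glued together with lipo. A rough sketch, with hypothetical file names:

# thin binaries built separately (PPC slices need an old Xcode/GCC, arm64 needs Xcode 12+)
lipo -create tinyclock.ppc tinyclock.ppc64 tinyclock.i386 tinyclock.x86_64 tinyclock.arm64 -output TinyClock
lipo -info TinyClock
# expected: Architectures in the fat file: TinyClock are: ppc ppc64 i386 x86_64 arm64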
Agreed that x86_64 would be needed for something that’s fully universal, but that’s not what’s in this package. Here’s the output of “file”:
TinyClock: Mach-O universal binary with 3 architectures: [i386:Mach-O executable i386] [ppc:Mach-O executable ppc] [arm64:Mach-O 64-bit executable arm64]
TinyClock (for architecture i386): Mach-O executable i386
TinyClock (for architecture ppc): Mach-O executable ppc
TinyClock (for architecture arm64): Mach-O 64-bit executable arm64
Here’s the output of lipo -detailed_info:
Fat header in: TinyClock
fat_magic 0xcafebabe
nfat_arch 3
architecture i386
    cputype CPU_TYPE_I386
    cpusubtype CPU_SUBTYPE_I386_ALL
    capabilities 0x0
    offset 4096
    size 26492
    align 2^12 (4096)
architecture ppc
    cputype CPU_TYPE_POWERPC
    cpusubtype CPU_SUBTYPE_POWERPC_ALL
    capabilities 0x0
    offset 32768
    size 26792
    align 2^12 (4096)
architecture arm64
    cputype CPU_TYPE_ARM64
    cpusubtype CPU_SUBTYPE_ARM64_ALL
    capabilities 0x0
    offset 65536
    size 51832
    align 2^14 (16384)
See the second footnote – the text right at the bottom of the page.
The binaries were rebuilt and now include all the slices:
lipo -info TinyClock.app/Contents/MacOS/TinyClock
Architectures in the fat file: TinyClock.app/Contents/MacOS/TinyClock are: i386 x86_64 ppc64 ppc arm64
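For the slices a current toolchain still supports, clang can produce a fat binary in a single invocation (a sketch; the PPC slices would still have to come from an older toolchain and be merged in with lipo):

# builds a two-slice x86_64 + arm64 universal binary in one step
clang -arch x86_64 -arch arm64 -framework Cocoa main.m -o TinyClock
lipo -archs TinyClock
# prints: x86_64 arm64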
I find it amusing that, in the Windows space, there was blood, sweat, and tears over the 64-bit transition (because every 32-bit driver got invalidated and NTVDM got nixed), yet Mac OS X changes architectures like it’s nobody’s business.
macOS is basically NeXTStep, which has been jumping from architecture to architecture since the 80s: m68k, x86, PA-RISC, SPARC, PPC, x64, ARM32, ARM64.
They also got the fat binary concept right from the get-go. Steve Jobs put together an excellent team at NeXT, especially all the folks from CMU/Mach, and he allowed them to get things right, especially in terms of OS/SW architecture.
In contrast, Windows NT came later and was supposed to be portable from the get-go, but its SW architecture was never as consistent, and it was much broader, which ended up making it a much less portable system in the end.
javiercero1,
To be fair to Microsoft, they’ve handled backwards compatibility for software well, and for longer than Apple even.
https://www.cultofmac.com/103458/os-x-lion-kills-rosetta-powerpc-support-heres-what-to-do-about-it/
Of course, drivers are a completely different matter, and my impression is that neither Windows nor macOS is great here. Kext drivers on macOS can be fragile. The internet shows lots of users experiencing errors with kext drivers for Parallels, FUSE, extfs, etc. Here’s an example with ZFS.
https://github.com/openzfsonosx/zfs/issues/582
It’s quite similar to Windows in that if a new driver is available, you update it and are on your way, but if not, then you may be stuck on an old OS to keep working drivers. This has been a big gripe for me on Windows.
I think the Windows NT kernel is portable in the same sense that Linux is portable, that is to say, at the source level: you need to recompile the source code to support new architectures. This works well with FOSS, but with Windows we’re dealing with lots of proprietary drivers and binary software that is effectively not portable without emulation.
“In contrast, Windows NT came later and was supposed to be portable from the get-go, but its SW architecture was never as consistent, and it was much broader, which ended up making it a much less portable system in the end.”
That’s not really true at all; Windows is highly portable and has been ported to just as many platforms as NeXT/OSX. It’s had releases on x86, x64, Itanium, DEC Alpha, PowerPC, MIPS, and Arm. On Arm it even runs x86/x64 binaries reasonably well.
@ bradleytompkins
The NT kernel is somewhat portable; the SW system built on top of it most definitely was not, which is why the vast majority of SW for NT only targeted x86.
BTW, the x64 emulation on Windows/ARM is only just coming online now.
javiercero1,
How is this different from macOS software? Regardless of operating system, you still need the developers of proprietary software to port their software to new architectures, even if it’s just a rebuild.
Isn’t it simply because x86 has the lion’s share of users? As a developer, that’s a big reason to target one architecture and not another.
Because OSX has had the concept of universal binaries forever.
Multiarch in OSX has been almost transparent to developers since the days of NeXTStep. That has not quite been the case for NT.
There is a reason why Apple has had three major architecture transitions in the lifetime of OSX, while NT never really moved, in any meaningful way, from x86.
javiercero1,
Yes, but the fat binary itself is just a container. Everything remains 100% dependent on whether the author chooses to support a platform. Consider that these “universal binaries forever” don’t do PPC owners much good today because the container itself does not make a build portable to PPC.
Do you consider Blender, the open-source 3D software, to be portable?
Note that they distribute two packages instead of one: a package for Apple Silicon Macs and another for Intel Macs.
For most people, software portability means that the software is available across platforms, not that the binaries for every platform are in one package.
It’s literally a drop-down in Visual Studio.
Bradleytompkins already pointed out that Windows has been ported to numerous architectures. The reason these alternatives weren’t popular is that they weren’t competitive with x86 offerings. It’s really that simple. Heck, even Apple themselves found that x86 was best for their own computers, until they decided to make their own chips.
Fat binaries are “getting it right”? The fact that it relies on the whim of the developer to support the individual architectures is clue enough that it’s not. Keep in mind fat binaries make the executable fatter, so there is a temptation for developers to remove support for the old architecture(s) as soon as they consider it viable (according to their whim).
Generally, no mainstream OS is truly portable across architectures. Even Android (which was supposed to run all apps in a VM) eventually acquired the ability to run native apps for OpenGL games and such, and as a result ended up with a preference for the ARM architecture. Which is one of the reasons Intel failed in the Android space despite their SoCs doing well in AnTuTu: some OpenGL games had to rely on emulation, which sometimes didn’t work at all and other times introduced a performance penalty.
Personally, I am partial to picking an architecture and sticking with it. The only reason Apple can change architectures is that they fully control the hardware, so users who depend on macOS have no choice.
@kurkosdr
Fat binaries have been one of the most successful approaches to multiarch SW deployment in the consumer space. The reason why Apple can change architectures is that they have a very good SW architecture team, and they have the track record to back it up.
kurkosdr,
I agree; why keep around bloated binaries for legacy hardware you don’t even have? Of course, selecting the right binary might be complicated for novices, so there’s that.
Ultimately, Apple is set on killing sideloaded software for normal users. This means the benefits of distributing fat binaries are greatly diminished. The App Store can detect the user’s architecture and download the appropriate binaries for them, without the bloat of unrelated/obsolete architectures.
@Alfman
In an OSX app package, the arch-specific portion is tiny compared to the common/shared elements/frameworks, which are arch-agnostic.
I wouldn’t necessarily call a few MBs “bloated.” Again, this is not the 90s; we measure storage in GBs now.
javiercero1,
That’s not true of the software I develop at work. However, I wanted to find something public to put those assertions to the test, so I took a look at Firefox. With a bit of scripting I found that the bulk (~78.4%) of the DMG is comprised of universal binaries (containing x86_64 and arm64).
Using the “lipo” tool I took those Mach-O binaries and extracted the thin equivalents as x86_64 and arm64 binaries respectively. Here’s a spreadsheet showing the size of the binaries after thinning.
https://ibb.co/3ds8TcB
Observe that the FAT binaries did not provide any resource savings for common/shared elements when combining x86_64 and arm64 binaries. The FAT binaries have roughly 100% overhead compared to the thin binaries of either architecture. Yes, I would call this bloat. I concede maybe this example isn’t representative and if you want to look at another example then by all means do.
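For anyone who wants to reproduce this, the per-binary extraction was along these lines (a sketch with hypothetical paths; lipo ships with the Xcode command-line tools):

# extract each slice of a universal binary and compare the sizes
BIN=Firefox.app/Contents/MacOS/firefox
lipo "$BIN" -thin x86_64 -output firefox.x86_64
lipo "$BIN" -thin arm64 -output firefox.arm64
ls -l "$BIN" firefox.x86_64 firefox.arm64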
You skipped over something I thought was important to discuss: Apple doesn’t want owners to sideload software anymore; Apple wants them to use Apple’s store. In this case it’s really hard to see the benefit of FAT binaries. The store should already know the architecture of the device requesting the application, so is there any reason for that device to download and use FAT binaries containing other architectures? IMHO these are unlikely to ever be used, and it seems like a waste of bandwidth and storage.
I didn’t catch it in time, but the columns in my last link are reversed: x86_64 is actually arm64 and vice versa.
Again, for people still stuck in the 90s anything is bloat.
A few MBs in a time of GBs is noise. Plus you can always slim your installed universal binaries and purge the other arch if you’re desperate for space.
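For example, something like this (hypothetical app path; note that stripping a slice invalidates the code signature, so the binary needs an ad-hoc re-sign afterwards):

# purge the x86_64 slice from a universal binary on an Apple Silicon Mac
BIN=/Applications/SomeApp.app/Contents/MacOS/SomeApp
lipo "$BIN" -remove x86_64 -output "$BIN.tmp" && mv "$BIN.tmp" "$BIN"
codesign --force --sign - /Applications/SomeApp.app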
javiercero1,
That’s a non-justification for adding 100% bloat to binaries.
Yes, except I think Apple should do it on behalf of users by automatically detecting their architecture and only sending them the resources they’ll actually need to run the software. That’s just the common-sense thing to do.
I wonder if someone with the skills could continue to update the macOS source code against PPC64. Apple still releases it, but it’s no longer an ISO and it’s x86_64 only. I am sure an IBM POWER7, or even more so a POWER9, would happily run macOS 12.
There’s a great deal of macOS that’s proprietary and closed-source, though much of the core OS is available as free/open-source software, even if it’s not Apple-derived. What I’m really surprised about is that there’s never been any real impetus to create an open-source macOS clone, a la ReactOS, Haiku, etc. Essentially all the components are there (GNUstep, Darwin); it just needs some gluing and patching to create a fully binary-compatible macOS clone.
There will probably be more demand for this with Apple’s move away from Intel.