I assume I don’t have to explain the difference between big-endian and little-endian systems to the average OSNews reader, and while most systems are either dual-endian or (most likely) little-endian, it’s still good practice to make sure your code works on both. If you don’t have a big-endian system, though, how do you do that?
When programming, it is still important to write code that runs correctly on systems with either byte order (see for example The byte order fallacy). But without access to a big-endian machine, how does one test it? QEMU provides a convenient solution. With its user mode emulation we can easily run a binary on an emulated big-endian system, and we can use GCC to cross-compile to that system.
↫ Hans Wennborg
If you want to make sure your code isn’t arbitrarily restricted to little-endian, running a few tests this way is worth it.

I used an old iBook G4 for just this purpose…

I think Little Endian pretty much won, except for corner cases.
Back in the day, RISC machines, spearheaded especially by ARM, were all Big Endian. Somewhere around the early 2000s, ARM processors started adding support for both, and the rest is history: people overwhelmingly chose Little Endian.
Why?
Because it makes programming with different-sized registers much easier. Of course, with native support you can split a 16-bit AX into AH and AL. However, there is a reason the 32-bit EAX and the later 64-bit RAX never got the same treatment: fully aliasing RAX would need 8 different 1-byte, 4 different 2-byte, and 2 different 32-bit sub-registers. It only gets the two original 1-byte ones (AH and AL) and one of each wider size (AX and EAX).
Bottom line?
These are nice exercises for exotic machines, but it is unlikely the wheel of time will turn backwards.
(Except “Network Byte Order”, which the article touches on.)
I had Rust’s targets for Big Endian bare-metal 32-bit Arm downgraded to Tier 3 because I could not get qemu-system-arm to work correctly at all in Big Endian mode. I tried printing a string with semihosting and it came out kcabdraws.