Linked by Thom Holwerda on Thu 21st Jul 2011 14:10 UTC, submitted by Jennimc
Mozilla & Gecko clones "Over the last couple of weeks, Mozilla has finally stepped up its 64-bit testing process. There are now five slaves dedicated to building Firefox for Windows x64, which means that from Firefox 8 and onwards, you'll be able to pick up 64-bit builds that are functionally identical to its 32-bit cousins but operating in native 64-bit CPU and memory space." The 64-bit version is about 10% faster, benchmarks show.
Thread beginning with comment 482027
To read all comments associated with this story, please click here.
The 64-bit experience is a LIE
by AndrewZ on Fri 22nd Jul 2011 14:08 UTC
AndrewZ
Member since:
2005-11-15

I'm going to take a stand here and point out some harsh realities. First of all, the idea that you can even tell whether a system is running 32-bit or 64-bit is a lie, plain and simple. For all intents and purposes you can't tell the difference, whether you're running Linux or Windows.

Most 32-bit applications on Windows have a 2 GB address space. This is enough for most applications except CAD, Photoshop, and databases; it is certainly enough for 99% of non-professional users, and for 99.9% of web browsing situations.

Porting a 32-bit app to 64-bit gains you at most a 5-15% speedup, mostly because the compiler can use more registers, not because of extra RAM or wider memory addressing. 5-15% is generally not enough to make a noticeable difference in the user experience; it's about the same gain as hyperthreading, i.e. not a whole lot.

And the same thing applies to this x86-32 vs. x86-64 argument. It's all in your mind. A user can't tell the difference; you can't tell the difference. If you ran the same Linux distro on different architectures, all other things (RAM, disk, comparable CPU) being equal, you would not be able to tell them apart.

And as for this business of x86-* being difficult to code for? 90% of the time that's also BS. An application written in a high-level language doesn't need architecture-specific optimization. Can you come up with instances where you needed to do some fixup in C or C++? Sure. That's the exception to the rule.

Let's face the harsh reality: CPUs are here not because they give us a better, different, or even distinguishable experience. They exist for economic reasons. ARM owns the mobile space because it uses less power, and because Apple could make money off of it. Atom was introduced not because it is better or 'different' but because Intel can make money off of it. SPARC is fading away not because it couldn't run 64-bit apps; it could, as of something like 12 years ago. Alpha was awesome, but it went away, not because it wasn't a killer architecture (it was), but because it was more expensive than x86.

I could go on here, but the fact remains: for 99% of desktop uses, 32-bit vs. 64-bit is irrelevant.

Ultimately Firefox is going 64-bit because it was time. Everything needs to go 64-bit because that's where things are heading. Does it make a big difference to the basic user? No. To high-end professional workstation users? Yes. Servers? Yes. Firefox users? No.

Reply Score: 2

Alfman Member since:
2011-01-28

AndrewZ,

"I could go on here but the fact remains. For 99% of desktop uses, 32-bit vs 64-bit is irrelevant."

I agree with your overall post; I've been saying that for eons. It's more about "cool factor" than anything else. Many people don't realize that 64-bit (or 128-bit, gasp) won't change their experience, certainly not until applications use more RAM. AMD took advantage of the incompatible upgrade from 32-bit to 64-bit to make other architectural improvements, but those have nothing to do with 32 vs. 64 bit in principle.


I do want to counter your following claim though:

"And as for this business of X86-* being difficult to code to? 90% of the time that's also BS. Any applications written in a high level language doesn't need architecture specific optimization."

I have benchmarked cases where the algorithm that runs fastest on a register-starved processor like the x86 is not the same algorithm that wins on processors with more registers.

The technical reason is that on the x86, the cost of accessing local variables that don't fit in registers is the same as the cost of dereferencing arbitrary pointers: both turn into memory accesses served from cache. This has profound implications for the choice of the optimal high-level algorithm.

Another difference: with limited registers, it can make more sense to recompute a value from registers every iteration than to pull a precomputed value off the stack every iteration.

I realize this is well below the level at which most developers operate. Code just needs to be good enough; anything more is overkill.

"Can you come up with instances where you needed to do some fixup in C or C++? Sure. That's the exception to the rule."

I think the potential for optimization is almost always there, but the NEED for it is the exception to the rule.

Reply Parent Score: 2