Linked by Tony Bourke on Thu 22nd Jan 2004 21:29 UTC
Benchmarks: When running tests, installing operating systems, and compiling software for my Ultra 5, I came to the stunning realization that, hey, this system is 64-bit, and all of the operating systems I installed on this Ultra 5 can run in 64-bit mode.
by Rayiner Hashem on Thu 22nd Jan 2004 22:50 UTC

@Uruloki A 64-bit machine doesn't necessarily have "enhanced capabilities" over a 32-bit machine. It can use 64-bit integers, and address > 4GB of memory. Unless you are doing either of those, there is no real way to write your program to be faster on a 64-bit machine. Precisely what sort of code changes are you thinking about? Besides, in this day and age, you should almost never micro-optimize your code for a specific CPU architecture. Unless you are comfortable holding all the details of a 20-stage pipeline, different latencies for dozens of instructions, complex branch prediction, and the state of 128 rename registers in your head at once, the compiler will do a better job than you. And all your micro-optimizations will be useless when the next Intel chip comes out with different performance characteristics.

PS> Using a 16-bit integer is often a bad idea. CPUs like word-aligned data, so 16-bit integers are quite often slower than 32-bit integers unless you are working with them in a way where the CPU can load two of them at a time. Often, if you put a 16-bit integer in a struct, the compiler will ignore you and pad it out to 32 bits anyway, to maintain alignment for the fields after it.

What you said is not true. On everything >= SuperSPARC and everything >= i387, the FPU is at least 64 bits wide. So whether or not you use a 64-bit build, double-precision (64-bit) floating-point math will run at the same speed as single-precision (32-bit) math.