Linked by David Adams on Tue 22nd Feb 2011 19:52 UTC, submitted by estherschindler
General Development Your company is ready to upgrade its custom applications from 32-bit to 64-bit. (Finally.) Here are 10 tips to help you make the transition as painless as possible.
Thread beginning with comment 463598
?
by d.marcu on Tue 22nd Feb 2011 20:47 UTC
d.marcu
Member since:
2009-12-27

I'm not a programmer, but I have a question: why is it so hard to migrate programs from 32-bit to 64-bit? I mean, on Linux I just recompile them for the architecture I want. The first thing I do on a fresh Linux install is remove alsa/pulseaudio and compile oss4, regardless of whether I'm on 32-bit or 64-bit (as an example).

Reply Score: 2

RE: ?
by Delgarde on Tue 22nd Feb 2011 20:58 in reply to "?"
Delgarde Member since:
2008-08-19

If you've written perfect software, then it's not hard at all. But nobody writes perfect software, and so code might be written with invalid assumptions - e.g. that the size of a pointer is the same as the size of a standard integer, or more generally, that a pointer can be stored in an int type. That's true when the pointer is 32-bit, but when the code is compiled 64-bit it no longer holds, and things either fail to compile or break with memory corruption at runtime.

Keep in mind it's not necessarily deliberate - it's just that because the code worked fine for a decade or more, it looks clearly OK. It might be wrong, but as long as a pointer is the same size as a 32-bit integer, it works and goes unnoticed. Until, that is, the pointer becomes 64 bits.

Reply Parent Score: 4

RE[2]: ?
by t3RRa on Tue 22nd Feb 2011 22:07 in reply to "RE: ?"
t3RRa Member since:
2005-11-22

The size of int depends on whether it's a 16-, 32- or 64-bit architecture and OS. What really matters is that in many programs developers have assumed the size of int is 4 bytes - which isn't guaranteed by the language (I usually assume it too, though in my case it does happen to hold anyway).

Reply Parent Score: 3

RE[2]: ?
by Valhalla on Wed 23rd Feb 2011 15:17 in reply to "RE: ?"
Valhalla Member since:
2006-01-24

"If you've written perfect software, then it's not hard at all. But nobody writes perfect software, and so code might be written with invalid assumptions - e.g. that the size of a pointer is the same as the size of a standard integer, or more generically, that a pointer can be stored in an int type. Which is true when the pointer is 32-bit, but when the code is compiled 64-bit, that's not the case, and things either fail to compile, or break with memory issues at runtime."


But why would you assume that a pointer is the size of an int? When dealing with pointers you use pointers, not ints. If you want an array of pointers, you create an array of pointers, not an array of ints. A pointer is a data type just like int, short, char, float, double or long: you can perform arithmetic on it, and a simple sizeof will tell you its size. I see no excuse (nor logic) for assuming a pointer is the size of an int; that's just crazy.

Reply Parent Score: 2

RE[2]: ?
by rexstuff on Wed 23rd Feb 2011 19:39 in reply to "RE: ?"
rexstuff Member since:
2007-04-06

I find it's less to do with having written perfect software and more to do with programmers trying to be more clever than they actually are. Things like hand-rolled pointer arithmetic or weird signed operations will break an app in a bad way when moving between 32-bit and 64-bit. The few clock cycles you save are in no way worth the portability penalty.

Laziness can also play a part: assuming int and void * are the same size, instead of using something like uintptr_t.

The moral of the story: don't try to outsmart the compiler. You will regret it, eventually.

Of course, if you're relying on some third-party library, which doesn't have a stable 64-bit version, that's a different story (and raises the question: why are you relying on a poorly supported third party library?)

Reply Parent Score: 1