Linked by David Adams on Tue 22nd Feb 2011 19:52 UTC, submitted by estherschindler
General Development Your company is ready to upgrade its custom applications from 32-bit to 64-bit. (Finally.) Here's 10 tips to help you make the transition as painless as possible.
Thread beginning with comment 463599
To view parent comment, click here.
To read all comments associated with this story, please click here.
RE: ?
by Delgarde on Tue 22nd Feb 2011 20:58 UTC in reply to "?"
Delgarde
Member since:
2008-08-19

If you've written perfect software, then it's not hard at all. But nobody writes perfect software, so code may rest on invalid assumptions - e.g. that the size of a pointer is the same as the size of a standard integer, or more generally, that a pointer can be stored in an int type. That holds when the pointer is 32-bit, but when the code is compiled as 64-bit it no longer does, and things either fail to compile or break with memory corruption at runtime.

Keep in mind it's not necessarily deliberate - the code has worked fine for a decade or more, so it's assumed to be ok. It might be wrong, but as long as a pointer is the same size as a 32-bit integer, it works and goes unnoticed. Until, that is, pointers become 64-bit.

Reply Parent Score: 4

RE[2]: ?
by t3RRa on Tue 22nd Feb 2011 22:07 in reply to "RE: ?"
t3RRa Member since:
2005-11-22

The size of int depends on whether the architecture and OS are 16-, 32- or 64-bit. What really matters is that in many programs developers have assumed the size of int is 4 bytes, which is only guaranteed on 32-bit - as I usually do myself (though in my case it does happen to hold anyway).

Reply Parent Score: 3

RE[3]: ?
by Carewolf on Tue 22nd Feb 2011 22:45 in reply to "RE[2]: ?"
Carewolf Member since:
2005-09-08

No, an int is 32-bit on both 32-bit and 64-bit architectures. On Windows even a long stays 32-bit in 64-bit builds, though on 64-bit Linux a long grows from 32-bit to 64-bit.

Integers are really only a problem if you try to store pointers in them, and that is a really odd sick thing to do.

You have much more of a problem updating system calls, and weird interfaces that change their API depending on the architecture (like ODBC).

Reply Parent Score: 5

RE[2]: ?
by Valhalla on Wed 23rd Feb 2011 15:17 in reply to "RE: ?"
Valhalla Member since:
2006-01-24

If you've written perfect software, then it's not hard at all. But nobody writes perfect software, so code may rest on invalid assumptions - e.g. that the size of a pointer is the same as the size of a standard integer, or more generally, that a pointer can be stored in an int type. That holds when the pointer is 32-bit, but when the code is compiled as 64-bit it no longer does, and things either fail to compile or break with memory corruption at runtime.


But why would you assume that a pointer is the size of an int? When dealing with pointers you use pointers, not ints. You want an array of pointers, you create an array of pointers, not an array of ints. A pointer is a data type just like int, short, char, float, double or long; you can perform arithmetic on it, and a simple sizeof will tell you its length. I see no excuse (nor logic) for assuming a pointer is the size of an int; that's just crazy.

Reply Parent Score: 2

RE[3]: ?
by malxau on Wed 23rd Feb 2011 19:44 in reply to "RE[2]: ?"
malxau Member since:
2005-12-04


But why would you assume that a pointer is the size of an int? When dealing with pointers you use pointers, not ints... I see no excuse (nor logic) for assuming a pointer is the size of an int; that's just crazy.


In an ideal world that's all well and good, but the world is rarely that ideal. One place where this is done in Windows is in the application message pump. Every message carries the same two arguments: a WPARAM and an LPARAM. For some messages extra information was required that couldn't fit in two 32-bit fields, so often LPARAM would point to some extra allocation. But for other message types it's a number, and for others it's a flags field...

So when porting to Win64, LPARAM needed to be retyped from LONG to LONG_PTR which allows it to remain a numeric field, but also be long enough to contain a pointer value to support messages that pass pointer values.

The thing for application developers to watch for is imperfect casts. If an app calls "SendMessage( hWnd, blah, blah, (LONG)(mystruct *)foo);" then on Win32 this will work fine, but on Win64 it will cause a subtle pointer truncation. If (LPARAM) were used instead of (LONG) things would be fine - but on Win32 those are the same type, so the bug goes unnoticed.

Reply Parent Score: 2

RE[2]: ?
by rexstuff on Wed 23rd Feb 2011 19:39 in reply to "RE: ?"
rexstuff Member since:
2007-04-06

I find it's less to do with having written perfect software and more to do with programmers trying to be more clever than they actually are. Trying to do things like custom pointer arithmetic or weird signed operations will break an app in a bad way when moving between 32-bit and 64-bit. The few extra clock cycles you save are in no way worth the portability penalty.

Laziness can also play a part: assuming 'int' and 'void *' are the same size, instead of using something like uintptr_t.

The moral of the story: don't try to outsmart the compiler. You will regret it, eventually.

Of course, if you're relying on some third-party library, which doesn't have a stable 64-bit version, that's a different story (and raises the question: why are you relying on a poorly supported third party library?)

Reply Parent Score: 1

RE[3]: ?
by saso on Wed 23rd Feb 2011 20:27 in reply to "RE[2]: ?"
saso Member since:
2007-04-18

I find it's less to do with having written perfect software and more to do with programmers trying to be more clever than they actually are. Trying to do things like custom pointer arithmetic or weird signed operations will break an app in a bad way when moving between 32-bit and 64-bit. The few extra clock cycles you save are in no way worth the portability penalty.


It isn't necessarily as simple as that. Very hot inner loops, particularly in graphics processing software, can necessitate a few dirty tricks to get the maximum performance out of the hardware. However, such occurrences should be few and far between, and should always be clearly marked as such and handled with care. What you describe is a programmer doing exactly what Donald Knuth warned against: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil".

Reply Parent Score: 1