Linked by David Adams on Tue 22nd Feb 2011 19:52 UTC, submitted by estherschindler
General Development Your company is ready to upgrade its custom applications from 32-bit to 64-bit. (Finally.) Here are 10 tips to help you make the transition as painless as possible.
Comment by Laurence
by Laurence on Tue 22nd Feb 2011 20:09 UTC
Member since: 2007-03-26

Never mind 32/64-bit apps, I'm still waiting for my organisation to migrate its remaining web clouds to HTML5.

Most of our staff are still stuck on IE6 because of short-sighted developers writing bespoke software that is incompatible with other browsers (or even later versions of the same browser).

The whole 64-bit migration probably won't happen for another decade at the pace my organisation seems to move. But then, such is life in the public sector.


Reply Score: 2

?
by d.marcu on Tue 22nd Feb 2011 20:47 UTC
Member since: 2009-12-27

I'm not a programmer, but I have a question: why is it so hard to migrate programs from 32 to 64 bit? I mean, on Linux I just recompile them for the architecture that I want. The first thing I do on a fresh Linux install is remove ALSA/PulseAudio and compile OSS4, regardless of whether I'm on 32-bit or 64-bit (as an example).

Reply Score: 2

RE: ?
by Delgarde on Tue 22nd Feb 2011 20:58 UTC in reply to "?"
Member since: 2008-08-19

If you've written perfect software, then it's not hard at all. But nobody writes perfect software, and so code might be written with invalid assumptions - e.g. that the size of a pointer is the same as the size of a standard integer or, more generally, that a pointer can be stored in an int type. That is true when the pointer is 32-bit, but when the code is compiled 64-bit it no longer holds, and things either fail to compile or break with memory issues at runtime.

Keep in mind it's not necessarily deliberate - the reasoning is just that because this code has worked fine for a decade or more, it's clearly OK. It might be wrong, but because a pointer is the same size as a 32-bit integer, it works and goes unnoticed. Until, that is, the pointer size becomes 64-bit.
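
A minimal sketch of the bug being described (hypothetical code, not from the article; assumes a typical 64-bit platform where int stays 4 bytes while pointers grow to 8):

    #include <stdio.h>

    int main(void)
    {
        int x = 42;
        int *p = &x;

        /* The invalid assumption: a pointer fits in an int. On a 64-bit
           build (void * is 8 bytes, int is 4) the cast silently discards
           the upper half of the address. */
        int stored = (int)(long long)p;
        int *back  = (int *)(long long)stored;

        printf("sizeof(int)=%zu sizeof(int *)=%zu\n",
               sizeof(int), sizeof(int *));
        printf("p=%p back=%p\n", (void *)p, (void *)back);
        /* On a 32-bit build the two match; on 64-bit 'back' may not. */
        return 0;
    }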

Reply Score: 4

RE[2]: ?
by t3RRa on Tue 22nd Feb 2011 22:07 UTC in reply to "RE: ?"
Member since: 2005-11-22

The size of an int depends on whether it's a 16-, 32- or 64-bit architecture and OS. What really matters is that in many programs developers have assumed the size of an int is 4 bytes - which is true only on 32-bit - as I usually do (though in my case it will definitely be the case anyway).

Reply Score: 3

RE[3]: ?
by Carewolf on Tue 22nd Feb 2011 22:45 UTC in reply to "RE[2]: ?"
Member since: 2005-09-08

No, an int is 32-bit on both 32-bit and 64-bit architectures. On Windows even a long is still 32-bit on 64-bit, though on 64-bit Linux a long changes from 32-bit to 64-bit.

Integers are really only a problem if you try to store pointers in them, and that is a really odd, sick thing to do.

You have much more of a problem with updating system calls, and weird interfaces that change their API depending on the architecture (like ODBC).
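
A quick sizeof check makes those differences visible; the expected values in the comment assume the usual ILP32/LP64/LLP64 data models:

    #include <stdio.h>

    int main(void)
    {
        /* Typical results:
           ILP32 (32-bit):         int=4 long=4 void*=4
           LP64  (64-bit Linux):   int=4 long=8 void*=8
           LLP64 (64-bit Windows): int=4 long=4 void*=8 */
        printf("int=%zu long=%zu long long=%zu void*=%zu\n",
               sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
        return 0;
    }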

Reply Score: 5

RE[4]: ?
by anda_skoa on Wed 23rd Feb 2011 11:55 UTC in reply to "RE[3]: ?"
Member since: 2005-07-07

No, an int is 32-bit on both 32-bit and 64-bit architectures.


Well, yes and no.
Yes in the sense that I personally don't know of any 32-bit environment where this wouldn't be true, but also no, because strictly speaking the only things you can safely assume are that an int is no shorter than a short (and at least 16 bits) and that a char is at least 8 bits wide.

That's why there are types with specified widths, e.g. int32_t.
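
For instance, a short sketch using the C99 <stdint.h> types (example code, not from the comment):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t  fixed32 = 42;                 /* exactly 32 bits everywhere */
        uint64_t fixed64 = 1ULL << 40;         /* exactly 64 bits everywhere */
        intptr_t addr    = (intptr_t)&fixed32; /* wide enough for a pointer  */

        printf("fixed32=%d fixed64=%llu addr=%p\n",
               (int)fixed32, (unsigned long long)fixed64, (void *)addr);
        return 0;
    }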

Reply Score: 2

RE[2]: ?
by Valhalla on Wed 23rd Feb 2011 15:17 UTC in reply to "RE: ?"
Member since: 2006-01-24

If you've written perfect software, then it's not hard at all. But nobody writes perfect software, and so code might be written with invalid assumptions - e.g. that the size of a pointer is the same as the size of a standard integer or, more generally, that a pointer can be stored in an int type. That is true when the pointer is 32-bit, but when the code is compiled 64-bit it no longer holds, and things either fail to compile or break with memory issues at runtime.


But why would you assume that a pointer is the size of an int? When dealing with pointers you use pointers, not ints. You want an array of pointers, you create an array of pointers, not an array of ints. A pointer is a data type just like int, short, char, float, double, long, and you can perform arithmetic on it, and you can do a simple sizeof to verify its size. I see no excuse (nor logic) for assuming a pointer is the size of an int; that's just crazy.

Reply Score: 2

RE[3]: ?
by malxau on Wed 23rd Feb 2011 19:44 UTC in reply to "RE[2]: ?"
Member since: 2005-12-04


But why would you assume that a pointer is the size of an int? When dealing with pointers you use pointers, not ints... I see no excuse (nor logic) for assuming a pointer is the size of an int; that's just crazy.


In an ideal world that's all well and good, but the world is rarely that ideal. One place where this is done in Windows is in the application message pump. Every message has the same two arguments: a WPARAM and an LPARAM. For some messages extra information was required that couldn't fit in two 32-bit fields, so often LPARAM would point to some extra allocation. But for other message types it's a number, and for others it's a flags field...

So when porting to Win64, LPARAM needed to be retyped from LONG to LONG_PTR, which allows it to remain a numeric field but also be wide enough to contain a pointer value, to support messages that pass pointers.

The thing for application developers to watch for is imperfect casts. If an app calls "SendMessage( hWnd, blah, blah, (LONG)(mystruct *)foo);" then on Win32 this will work fine, but on Win64 it will cause a subtle pointer truncation. If (LPARAM) were used instead of (LONG), things would be fine, but on Win32 those are the same type.
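
A sketch of both the broken and the safe cast (hypothetical code; MYSTRUCT and the WM_APP message are made-up stand-ins for the "blah" placeholders above):

    #include <windows.h>

    typedef struct { int width; int height; } MYSTRUCT;

    void Notify(HWND hWnd, MYSTRUCT *foo)
    {
        /* Buggy on Win64 (fine on Win32, truncates the pointer on 64-bit):
           SendMessage(hWnd, WM_APP, 0, (LONG)foo); */

        /* Correct on both: LPARAM is LONG_PTR, always pointer-sized. */
        SendMessage(hWnd, WM_APP, 0, (LPARAM)foo);
    }

    /* Receiving side: cast the LPARAM back to the pointer type. */
    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_APP) {
            MYSTRUCT *p = (MYSTRUCT *)lParam;
            (void)p;   /* use the struct's fields here */
            return 0;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }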

Reply Score: 2

RE[4]: ?
by Valhalla on Thu 24th Feb 2011 18:18 UTC in reply to "RE[3]: ?"
Member since: 2006-01-24


The thing for application developers to watch for is imperfect casts. If an app calls "SendMessage( hWnd, blah, blah, (LONG)(mystruct *)foo);" then on Win32 this will work fine, but on Win64 it will cause a subtle pointer truncation. If (LPARAM) were used instead of (LONG), things would be fine, but on Win32 those are the same type.

Yes, but again, if you follow the API's declared types you will be fine - in this case, use LPARAM and WPARAM rather than whatever they happen to be defined as. That is why it is important never to assume anything about API types: they can change 'behind the scenes', which can break your application if you 'assume' anything. When dealing with foreign code that you can't modify, simply stick to the interface provided, or set yourself up for a potential ton of headaches.

Reply Score: 2

RE[5]: ?
by malxau on Sat 26th Feb 2011 08:58 UTC in reply to "RE[4]: ?"
Member since: 2005-12-04

Yes, but again, if you follow the API's declared types you will be fine - in this case, use LPARAM and WPARAM rather than whatever they happen to be defined as. That is why it is important never to assume anything about API types: they can change 'behind the scenes', which can break your application if you 'assume' anything.


This triggered some long-lost repressed memory. I went back to my archives, and sure enough, WPARAM/LPARAM didn't originally exist - Windows 3.0 and earlier used WORD and DWORD directly. WPARAM/LPARAM were created to facilitate the move to Win32 (where WPARAM grew from 16 to 32 bits). But any code that predates that - and there is a surprising amount - might have just been coded with the documented and defined type, and now finds itself broken.

Reply Score: 2

RE[2]: ?
by rexstuff on Wed 23rd Feb 2011 19:39 UTC in reply to "RE: ?"
Member since: 2007-04-06

I find it's less to do with having written perfect software and more to do with programmers trying to be more clever than they actually are. Trying to do things like custom pointer arithmetic or weird signed operations will break an app in a bad way when moving between 32 and 64 bit. The few extra clock cycles you save are in no way worth the portability penalty.

Laziness can also play a part: assuming 'int' and 'void *' are the same size, instead of using something like uintptr_t.
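
A minimal sketch of the uintptr_t round-trip being suggested (hypothetical example):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        double value = 3.14;
        void *p = &value;

        /* uintptr_t is an unsigned integer type defined to be wide enough
           to round-trip a pointer, on 32-bit and 64-bit builds alike. */
        uintptr_t bits = (uintptr_t)p;
        void *back = (void *)bits;

        printf("p=%p back=%p same=%d\n", p, back, p == back);
        return 0;
    }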

The moral of the story: don't try to outsmart the compiler. You will regret it, eventually.

Of course, if you're relying on some third-party library, which doesn't have a stable 64-bit version, that's a different story (and raises the question: why are you relying on a poorly supported third party library?)

Reply Score: 1

RE[3]: ?
by saso on Wed 23rd Feb 2011 20:27 UTC in reply to "RE[2]: ?"
Member since: 2007-04-18

I find it's less to do with having written perfect software and more to do with programmers trying to be more clever than they actually are. Trying to do things like custom pointer arithmetic or weird signed operations will break an app in a bad way when moving between 32 and 64 bit. The few extra clock cycles you save are in no way worth the portability penalty.


It isn't necessarily as simple as that. Very hot inner loops, particularly in graphics-processing software, can necessitate a few dirty tricks to get the maximum performance out of the hardware. However, such occurrences should be few and far between, and should always be clearly marked as such and handled with care. What you describe is what comes of a programmer doing exactly what Donald Knuth warned against: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil".

Reply Score: 1

This article is 10 years late
by rom508 on Tue 22nd Feb 2011 22:07 UTC
Member since: 2007-04-20

It's funny how 64-bit architectures have been around for decades, and still to this day people are posting 32-to-64-bit application migration how-tos.

This really says something about the quality of programmers out there. Oh, and I also blame this on x86: if half of all programmers had learned to program on SPARC, we wouldn't have as many byte-order or data-alignment issues in the code out there.
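
For what it's worth, a minimal illustration of the kind of byte-order assumption in question (hypothetical code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t word = 0x11223344;
        unsigned char *bytes = (unsigned char *)&word;

        /* Little-endian x86 prints "44 33 22 11"; big-endian SPARC prints
           "11 22 33 44". Code that picks bytes out of a wider integer like
           this breaks as soon as it assumes one particular order. */
        printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
        return 0;
    }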

Reply Score: 5

Bad habits coming back to haunt you
by saso on Tue 22nd Feb 2011 22:22 UTC
Member since: 2007-04-18

Many of the points in this article seem related to dealing with problems caused by poor programming practices:

3. Data Access

Anybody who writes a struct directly from memory to disk deserves to have their fingers broken... (joking).
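
For reference, the fragile pattern being joked about looks something like this (hypothetical sketch; save_portable still omits endianness conversion for brevity):

    #include <stdio.h>
    #include <stdint.h>

    struct record {
        int32_t id;
        int64_t timestamp;   /* compilers may insert padding before this */
    };

    /* Fragile: the on-disk layout depends on padding, alignment and
       endianness, so a 64-bit build may not read a 32-bit build's file. */
    void save_raw(FILE *f, const struct record *r)
    {
        fwrite(r, sizeof(*r), 1, f);
    }

    /* Sturdier: write each field explicitly, with a fixed width and order. */
    void save_portable(FILE *f, const struct record *r)
    {
        fwrite(&r->id, sizeof(int32_t), 1, f);
        fwrite(&r->timestamp, sizeof(int64_t), 1, f);
    }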

4. 64-bit Programming Rules and Skills

The problem is that managed environments, training and schools have encouraged an attitude that programming is just the manual labor of writing lots of text, essentially downgrading programmers to simple code monkeys.

5. Operating System Feature Access

The example with the Windows registry here is just a plain example of bad habits proliferating. App developers used the registry in a clearly stupid way (putting binary-format-dependent data in it), forcing Microsoft to come up with an even stupider approach to dealing with 64-bit apps and the registry.
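
To illustrate the split: on 64-bit Windows a 32-bit process sees a redirected view of parts of the registry unless it opts out with the WOW64 access flags. A sketch (the key path is made up):

    #include <windows.h>

    /* Opens the 64-bit ("native") view of the key even from a 32-bit
       process. Without KEY_WOW64_64KEY, a 32-bit process is silently
       redirected to the WOW6432Node view instead. */
    HKEY open_native_view(void)
    {
        HKEY hKey;
        if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                          L"SOFTWARE\\ExampleVendor\\ExampleApp", /* made-up path */
                          0,
                          KEY_READ | KEY_WOW64_64KEY,
                          &hKey) != ERROR_SUCCESS)
            return NULL;
        return hKey;
    }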

7. Trick Code

Sometimes unavoidable, such as when writing a very hot inner code loop in assembly, but every such occurrence should be clearly separated out in the application into a platform-specific section of the source tree (e.g. Linux's source tree contains an arch/ subdirectory holding all architecture-specific code). Sprinkling these gems around in generic code just reeks of utter ignorance.

9. Supporting Hardware

No shit, Sherlock. Each time you interact with data sources external to your own address space, you CONVERT the damn data. You don't just read the bits and put them in memory - case in point, ever wonder what that "network-byte-order" thing is?
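
The network-byte-order routines are exactly that kind of explicit conversion, e.g. (minimal sketch):

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>   /* htonl/ntohl; winsock2.h on Windows */

    int main(void)
    {
        uint32_t host = 0x11223344;

        uint32_t wire = htonl(host);   /* to big-endian network order */
        uint32_t back = ntohl(wire);   /* back to whatever the host uses */

        printf("host=%08x wire=%08x back=%08x\n", host, wire, back);
        return 0;
    }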

What I see in most of my peers is that they genuinely lack any sort of detailed understanding of how computers operate at a low level. To make matters worse, many managed environments nowadays hide all the complexities of programming (like pointer arithmetic), leaving inexperienced programmers at a loss when they actually do encounter a low-level problem that the managed environment didn't handle quite as well as they'd hoped. I think schools and employee training programs should (re)introduce courses in assembly programming, ideally across a wide variety of platforms (CISC/RISC, endianness, word-size differences, from large-memory environments down to microcontrollers). Or maybe I'm wrong and just happen to run into the wrong people all the time.

Reply Score: 5