Never mind 32/64bit apps, I'm still waiting for my organisation to migrate its remaining web clients to HTML5.
Most of our staff are still stuck on IE6 because of short-sighted developers writing bespoke software that is incompatible with other browsers (or even later versions of the same browser).
The whole 64bit migration probably won't happen for another decade at the pace my organisation seems to move. But then such is life in the public sector. Edited 2011-02-22 20:09 UTC
I'm not a programmer, but I have a question: Why is it so hard to migrate programs from 32 to 64 bit? I mean, in Linux I just recompile them on the architecture that I want. The first thing that I do to a fresh Linux install is to remove alsa/pulse audio and compile oss4, regardless of whether I use 32-bit or 64-bit (as an example).
If you've written perfect software, then it's not hard at all. But nobody writes perfect software, and so code might be written with invalid assumptions - e.g. that the size of a pointer is the same as the size of a standard integer, or more generally, that a pointer can be stored in an int type. That's true when the pointer is 32-bit, but when the code is compiled 64-bit it no longer holds, and things either fail to compile or break with memory issues at runtime.
Keep in mind it's not necessarily deliberate - it's just that because this code worked fine for a decade or more, it's clearly ok. It might be wrong, but because a pointer is the same size as a 32-bit integer, it works and goes unnoticed. Until, that is, the pointer size becomes 64-bit.
The size of int depends on whether it's a 16-, 32- or 64-bit architecture and OS. What really matters is that in many programs developers have assumed the size of int is 4 bytes, as I usually do myself (though in my case that assumption happens to hold anyway).
No, an int is 32-bit on both 32-bit and 64-bit architectures. On Windows even a long is still 32-bit when compiled 64-bit, though on 64-bit Linux a long changes from 32-bit to 64-bit.
Integers are really only a problem if you try to store pointers in them, and that is a really odd sick thing to do.
You have many more problems with updating system calls, and weird interfaces that change their API depending on the architecture (like ODBC).
I find it's less to do with having written perfect software and more to do with programmers trying to be more clever than they actually are. Trying to do things like custom pointer arithmetic or weird signed operations will break an app in a bad way when moving between 32 and 64 bit. The few extra clock cycles you save are in no way worth the portability penalty.
Laziness can also play a part. Assuming 'int' and 'void *' are the same size, instead of using something like uintptr_t.
The moral of the story: don't try to outsmart the compiler. You will regret it, eventually.
Of course, if you're relying on some third-party library, which doesn't have a stable 64-bit version, that's a different story (and raises the question: why are you relying on a poorly supported third party library?)
It's funny how 64-bit architectures have been around for decades, and still to this day people are posting 32-to-64 bit application migration howtos.
This really tells something about the quality of programmers out there. Oh and I also blame this on x86, if half of programmers were learning to program on SPARC, we wouldn't have as many byte order, or data alignment issues in the code out there.
Many of the points in this article seem related to dealing with problems caused by poor programming practices:
3. Data Access
Anybody who writes a struct directly from memory to disk deserves to have their fingers broken... (joking).
4. 64-bit Programming Rules and Skills
The problem is that managed environments, training and schools have encouraged an attitude that programming is just manual labor of writing lots of text, and essentially downgraded programmers to simple code monkeys.
5. Operating System Feature Access
The example with the Windows registry here is just a plain example of bad habits proliferating. App developers used the registry in a clearly stupid way (putting binary-format-dependent data in it), forcing Microsoft to come up with an even stupider approach to dealing with 64-bit apps and the registry.
7. Trick Code
Sometimes unavoidable, such as when writing a very hot inner code loop in assembly, but every such occurrence should be clearly separated out in the application into a platform-specific section of the source tree (e.g. Linux's source tree contains an arch/ subdirectory holding all architecture-specific code). Sprinkling these gems around in generic code just reeks of utter ignorance.
9. Supporting Hardware
No shit, Sherlock. Each time you interact with data sources external to your own address space, you CONVERT the damn data. You don't just read the bits and put them in memory - case in point, ever wonder what that "network-byte-order" thing is?
What I see in most of my peers is that they genuinely lack any sort of detailed understanding of how computers operate at a low level. To make matters worse, many managed environments nowadays hide all the complexities in programming (like pointer arithmetic), leaving inexperienced programmers at a loss when they actually do encounter a low-level problem that the managed environment didn't handle quite as well as they'd hoped. I think schools and employee training programs should (re)introduce courses in assembly programming, ideally across a wide variety of platforms (CISC/RISC, endianness, word-size differences, large memory environments down to microcontrollers). Or maybe I'm wrong and just happen to run into the wrong people all the time.