Linked by Thom Holwerda on Fri 28th Sep 2012 21:51 UTC, submitted by MOS6510
General Development "When I started writing programs in the late 80s it was pretty primitive and required a lot of study and skill. I was a young kid doing this stuff, the adults at that time had it even worse and some of them did start in the punch card era. This was back when programmers really had to earn their keep, and us newer generations are losing appreciation for that. A generation or two ago they may have been better coders than us. More importantly they were better craftsmen, and we need to think about that." I'm no programmer, but I do understand that the current crop of programmers could learn a whole lot from older generations. I'm not going to burn my fingers on whether they were better programmers or not, but I do believe they had a far greater understanding of the actual workings of a computer. Does the average 'app developer' have any clue whatsoever about low-level code, let alone something like assembly?
Permalink for comment 537065
RE[3]: Get off my lawn!
by Alfman on Sun 30th Sep 2012 17:23 UTC in reply to "RE[2]: Get off my lawn!"
Member since:
2011-01-28

Doc Pain,

"Real programmers don't rely on nebulous optimization that are less optimum, as known from 'The Story of Mel, a real programmer'. :-)"

Haha, it's a fun story. But that sounds more like the manifesto of a defiant programmer than anything having practical value.

Hell, even in Intel processor terms I've seen similar patterns, like programmers who relied on the memory wrapping behaviour of the 8086 at the 1MB boundary due to the processor's original limit of 20 address lines. It was ridiculous to rely on that quirk, yet some (Microsoft) programmers did, and consequently there have been a number of hardware hacks to control the A20 address line ever since.

http://www.openwatcom.org/index.php/A20_Line

I do appreciate clever tricks as a form of CS "art", however I kind of hope the employees responsible for the A20 mess were fired over it since it was very irresponsible.

I myself have often cited shortcomings of the GCC optimiser, leaving me to contemplate whether to use non-portable assembly or to accept GCC's code as is. In most cases suboptimal code is irrelevant in the grand scheme of the program, so it's not even worth looking at. However, in very tight loops such as those in encryption/compression/etc algorithms, hand optimisation can make an observable difference.

In any case, suffice it to say that GCC can handle the multiply by shift/addition on its own. So my personal preference is to see x*CONST in code.

Is (((x<<1)+x)<<1)+x faster than x*7? I don't know without profiling it. On x86 it'd compile down to two LEA opcodes, which are darn fast. What about (x<<3)-x? Long story short, I'd rather let GCC handle it when it can since it's architecture specific anyway.
