Linked by Thom Holwerda on Fri 15th Feb 2013 10:40 UTC
General Development "Since I left my job at Amazon I have spent a lot of time reading great source code. Having exhausted the insanely good idSoftware pool, the next thing to read was one of the greatest game of all time: Duke Nukem 3D and the engine powering it named 'Build'. It turned out to be a difficult experience: The engine delivered great value and ranked high in terms of speed, stability and memory consumption but my enthousiasm met a source code controversial in terms of organization, best practices and comments/documentation. This reading session taught me a lot about code legacy and what helps a software live long." Hail to the king, baby.
Thread beginning with comment 552818
RE[7]: Code Review
by Megol on Mon 18th Feb 2013 15:54 UTC in reply to "RE[6]: Code Review"
Megol Member since:
2011-04-11

"The processors have also become too complex:

- out of order execution"


It's like: hey, we (the processor manufacturer) have inserted a little data-flow engine in your processor; sadly it only works on values in registers - memory accesses are still mostly in order. And the programmer says: okay, that's nice, I guess I don't have to work as hard to make instructions start in parallel and can instead optimize data flows.
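
To make that concrete, a minimal sketch (the function names and the choice of four accumulators are mine, purely for illustration): splitting a reduction into independent accumulators shortens the dependency chain, which is exactly the kind of data-flow optimization an out-of-order core rewards.

#include <cstddef>

// Naive sum: every addition depends on the previous one,
// so the loop is limited by add latency.
double sum_serial(const double* a, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Four independent accumulators: the out-of-order engine can
// keep several additions in flight at once.
double sum_unrolled(const double* a, std::size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i + 0];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; ++i)   // leftover elements
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}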


"- parallel execution units"


See above.


"- branch prediction"


Okay, care to explain how this could make any difference when coding? This is a mechanism that applies predictions from dynamic patterns when executing code, not something that has to be coded for. Current x86 processors don't even support branch hints.


"- multiple cache levels"


Unless you code in Fortran I don't think you'll ever see your compiler optimize for this. But yes, multi-dimensional cache blocking/tiling is a pain in the ass, and making it dynamically adapt to the platform's cache hierarchy almost requires runtime code generation - which your standard compiler won't do.
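
For reference, this is roughly what manual blocking looks like (a sketch; the BLOCK constant is a guess that has to be tuned per cache hierarchy, which is precisely the part your compiler won't do for you):

#include <cstddef>

// Blocked (tiled) matrix multiply, C += A * B, all n x n row-major.
// BLOCK is an illustrative guess that has to be tuned per machine.
const std::size_t BLOCK = 64;

void matmul_blocked(const double* A, const double* B, double* C, std::size_t n) {
    for (std::size_t ii = 0; ii < n; ii += BLOCK)
        for (std::size_t kk = 0; kk < n; kk += BLOCK)
            for (std::size_t jj = 0; jj < n; jj += BLOCK)
                // work on one tile that (hopefully) stays resident in cache
                for (std::size_t i = ii; i < ii + BLOCK && i < n; ++i)
                    for (std::size_t k = kk; k < kk + BLOCK && k < n; ++k) {
                        double aik = A[i * n + k];
                        for (std::size_t j = jj; j < jj + BLOCK && j < n; ++j)
                            C[i * n + j] += aik * B[k * n + j];
                    }
}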


"- opcode rewriting"


Don't know what you mean, perhaps fusing? Not a problem even for the beginner.
Macro-op fusion -> CMP/TEST+Bcc is treated as one instruction.
Micro-op fusion -> Intel finally stopped splitting things that never needed to be split. This mostly affects the instruction scheduler, as load+execute instructions don't take two scheduler slots.
MOV elimination -> some MOV instructions are executed in the renaming stage instead of requiring execution resources.


"- SIMD"


Can be a problem if one wants near-optimal execution on several generations of processors. However, thinking the compiler will make it easier is in most cases wrong: yes, compilers can generate several versions of code and dynamically select which code path should be used, but really critical code often requires changes to the data structures to fit the underlying hardware, which compilers will not do.

Doing the same in assembly language isn't a problem.
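
A sketch of the kind of data-structure change I mean (the particle structs are purely illustrative): going from array-of-structures to structure-of-arrays so the hot loop becomes unit-stride and trivial to run through SSE/AVX, whether hand-written or compiler-generated.

#include <cstddef>

// Array-of-structures: x, y, z of one particle sit next to each other,
// which makes it awkward to vectorize across particles.
struct ParticleAoS { float x, y, z; };

void move_aos(ParticleAoS* p, std::size_t n, float dx) {
    for (std::size_t i = 0; i < n; ++i)
        p[i].x += dx;             // strided access, hard to vectorize well
}

// Structure-of-arrays: all x values are contiguous, so the same loop
// becomes a straightforward candidate for packed SSE/AVX code.
struct ParticlesSoA { float* x; float* y; float* z; };

void move_soa(ParticlesSoA& p, std::size_t n, float dx) {
    for (std::size_t i = 0; i < n; ++i)
        p.x[i] += dx;             // unit-stride, vectorizer-friendly
}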


"- NUMA"


So you think your compiler will make the code NUMA-aware?
This is something affected by algorithm choices.
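
One such choice, sketched under the assumption of a Linux-style first-touch page policy and a compiler with OpenMP support (both are assumptions, and the function names are mine): initialize the data with the same threads and the same static schedule that will later work on it, so the pages land on the right node.

#include <cstddef>

// First-touch initialization: on a typical Linux NUMA system a page is
// physically placed on the node of the thread that first writes to it.
double* alloc_first_touch(std::size_t n) {
    double* v = new double[n];            // large allocation, pages usually not committed yet
    #pragma omp parallel for schedule(static)
    for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)n; ++i)
        v[i] = 0.0;                       // each thread faults in "its" pages
    return v;
}

void work(double* v, std::size_t n) {
    #pragma omp parallel for schedule(static)
    for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)n; ++i)
        v[i] += 1.0;                      // same partitioning -> mostly node-local accesses
}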


"- (put your favourite feature here)

You need to be superhuman to really optimize for a given processor given all the variables, and when you manage to do it, it is only for a specific model."


No, it simply requires knowledge of software-hardware interaction. Assembly language programmers are also better at optimizing e.g. C code, as they know that under the abstraction it's still a von Neumann machine.


"Only in the embedded space is it still an advantage to code directly in assembly."


Most embedded code is written in C, so I guess you are wrong here too.

Reply Parent Score: 2

RE[8]: Code Review
by phreck on Wed 20th Feb 2013 16:52 in reply to "RE[7]: Code Review"
phreck Member since:
2009-08-13

"The processors have also become too complex:

- out of order execution


It's like: hey, we (the processor manufacturer) have inserted a little data-flow engine in your processor; sadly it only works on values in registers - memory accesses are still mostly in order. And the programmer says: okay, that's nice, I guess I don't have to work as hard to make instructions start in parallel and can instead optimize data flows.
"

There's also dependency analysis to think about: your expressions shouldn't be too intertwined.

"
- parallel execution units


See above.
"

Flatness. Don't nest too deep. In C++, I've seen a simple loop iterating over an std::vector compiled to really fine SSE code.
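
Something along these lines (illustrative only, the function name is made up): a flat, unit-stride loop with no branching in the body is exactly what the auto-vectorizer copes with well.

#include <cstddef>
#include <vector>

// The kind of loop that auto-vectorizes nicely with -O2/-O3:
// flat, unit-stride, no branches or calls in the body.
void scale_and_add(std::vector<float>& out,
                   const std::vector<float>& in, float k) {
    const std::size_t n = out.size() < in.size() ? out.size() : in.size();
    for (std::size_t i = 0; i < n; ++i)
        out[i] += k * in[i];
}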


"
- branch prediction


Okay, care to explain how this could make any difference when coding? This is a mechanism that applies predictions from dynamic patterns when executing code, not something that has to be coded for. Current x86 processors don't even support branch hints.
"

This goes hand in hand with the above and with speculative execution. Generally, your code should be as predictable as possible, i.e. the sooner the branch condition is known, the better.

There are also the defaults for when the CPU discovers a branch for the first time: if the CPU by default assumes "branch taken", you should structure your code so that it is less expensive to enter the body of an if statement.

Then, especially in tight loops and for the branches within them, the branch conditions shouldn't vary on every iteration, because the CPU has a branch target buffer and predicts from past behaviour.
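
To make that concrete (a small sketch, names made up): the first loop has a data-dependent branch that hurts when the data is unpredictable; the second expresses the same thing in a form compilers usually lower to a conditional move, so there is nothing to predict.

#include <cstddef>

// Data-dependent branch: if the sign of a[i] is effectively random,
// the predictor mispredicts often and the loop crawls.
long sum_positive_branchy(const int* a, std::size_t n) {
    long s = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (a[i] > 0)          // unpredictable if the data is unsorted
            s += a[i];
    }
    return s;
}

// Branch-free form: the condition becomes arithmetic, which compilers
// usually lower to a conditional move or masked add.
long sum_positive_branchless(const int* a, std::size_t n) {
    long s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += (a[i] > 0) ? a[i] : 0;
    return s;
}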

"
- multiple cache levels


Unless you code in Fortran I don't think you'll ever see your compiler optimize for this. But yes, multi-dimensional cache blocking/tiling is a pain in the ass, and making it dynamically adapt to the platform's cache hierarchy almost requires runtime code generation - which your standard compiler won't do.
"

Enter the world of cache oblivious algorithms.
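
For those who haven't met them: a cache-oblivious algorithm simply recurses until the sub-problem fits in whatever cache level is there, without ever naming a block size. A minimal sketch (out-of-place matrix transpose; my own illustrative code, base-case size picked arbitrarily):

#include <cstddef>

// Cache-oblivious matrix transpose: recursively split the larger dimension
// until the sub-block is small, never naming an explicit block size.
// src is totalRows x totalCols row-major, dst is totalCols x totalRows.
// Call as transpose(src, dst, R, C, R, C) for an R x C source.
void transpose(const double* src, double* dst,
               std::size_t rows, std::size_t cols,
               std::size_t totalRows, std::size_t totalCols,
               std::size_t r0 = 0, std::size_t c0 = 0) {
    if (rows <= 16 && cols <= 16) {              // base case: small tile
        for (std::size_t i = 0; i < rows; ++i)
            for (std::size_t j = 0; j < cols; ++j)
                dst[(c0 + j) * totalRows + (r0 + i)] =
                    src[(r0 + i) * totalCols + (c0 + j)];
    } else if (rows >= cols) {                   // split the taller dimension
        std::size_t h = rows / 2;
        transpose(src, dst, h, cols, totalRows, totalCols, r0, c0);
        transpose(src, dst, rows - h, cols, totalRows, totalCols, r0 + h, c0);
    } else {                                     // split the wider dimension
        std::size_t w = cols / 2;
        transpose(src, dst, rows, w, totalRows, totalCols, r0, c0);
        transpose(src, dst, rows, cols - w, totalRows, totalCols, r0, c0 + w);
    }
}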



"No, it simply requires knowledge of software-hardware interaction. Assembly language programmers are also better at optimizing e.g. C code, as they know that under the abstraction it's still a von Neumann machine."


It is not that simple, actually. I've seen assembly-level programmers who really know what they are doing, and they do it well; however, that was usually at the micro-optimization level, which doesn't scale to long-lived and/or big software.

I see more potential at the algorithmic level: e.g., as already mentioned, cache-oblivious algorithms, knowing the right algorithms and your domain, and even being able to decide at runtime which algorithm to use.
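
A toy example of that last point (the cutoff of 32 is an illustrative guess, not a measured constant): choose the algorithm at runtime from a property of the input.

#include <algorithm>
#include <cstddef>
#include <vector>

// Pick the algorithm at runtime based on input size.
void sort_adaptive(std::vector<int>& v) {
    if (v.size() <= 32) {
        // small inputs: simple insertion sort, friendly to cache and predictor
        for (std::size_t i = 1; i < v.size(); ++i) {
            int key = v[i];
            std::size_t j = i;
            while (j > 0 && v[j - 1] > key) { v[j] = v[j - 1]; --j; }
            v[j] = key;
        }
    } else {
        std::sort(v.begin(), v.end());   // general-purpose sort otherwise
    }
}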




All in all, I am not sure whether you are evangelizing assembly, C or compilers in your post.

Reply Parent Score: 1

RE[9]: Code Review
by Alfman on Wed 20th Feb 2013 19:30 in reply to "RE[8]: Code Review"
Alfman Member since:
2011-01-28

phreck,

"It is not that simple actually. I've seen assembly level programmers who really know what they do, and they do it well, however, that usually was at the micro-optimization level, which doesn't scale to long-lived and/or big softwares."

That's it in a nutshell: large projects cannot be written in assembly any more, but it can still be used effectively to tune pieces of a larger project.


"All in all I am not sure if you are evangelizing Assembly, C or compilers in your post."

You weren't addressing me, but I strongly prefer C. The only time I ever consider assembly is for code that I'm already benchmarking and in which I have identified performance deficiencies. But I try to get the most out of the C version before resorting to assembly code.
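
In spirit it looks like this (a minimal sketch using std::chrono here for brevity - in plain C I'd reach for clock_gettime - and do_work() is just a stand-in for the real hot routine): measure first, and only think about assembly if the numbers say the C version has hit a wall.

#include <chrono>
#include <cstdio>

volatile unsigned long sink;           // keeps the optimizer from deleting the work

// Stand-in for the hot routine you suspect needs tuning.
static unsigned long do_work(unsigned long n) {
    unsigned long acc = 0;
    for (unsigned long i = 0; i < n; ++i)
        acc += i * i;
    return acc;
}

int main() {
    auto t0 = std::chrono::steady_clock::now();
    sink = do_work(100000000UL);
    auto t1 = std::chrono::steady_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    std::printf("do_work took %lld us\n", (long long)us);   // decide from the numbers
}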

Reply Parent Score: 2