Linked by MOS6510 on Fri 17th May 2013 22:22 UTC
Hardware, Embedded Systems "It is good for programmers to understand what goes on inside a processor. The CPU is at the heart of our career. What goes on inside the CPU? How long does it take for one instruction to run? What does it mean when a new CPU has a 12-stage pipeline, or 18-stage pipeline, or even a 'deep' 31-stage pipeline? Programs generally treat the CPU as a black box. Instructions go into the box in order, instructions come out of the box in order, and some processing magic happens inside. As a programmer, it is useful to learn what happens inside the box. This is especially true if you will be working on tasks like program optimization. If you don't know what is going on inside the CPU, how can you optimize for it? This article is about what goes on inside the x86 processor's deep pipeline."
Thread beginning with comment 562155
RE[6]: Comment by Drumhellar
by theosib on Mon 20th May 2013 14:12 UTC in reply to "RE[5]: Comment by Drumhellar"
theosib Member since:
2006-03-02

My opinion is that this is less about raw compute power and more about the limits of compiler developers. This reminds me of Ray Kurzweil's stupid singularity thing, which seems to imply that the instant computers are as fast as the human brain, they'll magically develop human intelligence. It doesn't matter how fast they are if we don't know the algorithms for human intelligence. And we still don't.

There's the same problem with compilers. I'm reminded of two events in computer history. One is LISP machines, and the other is Itanium. In both cases, hardware designers assumed that a "sufficiently smart compiler" would be able to take advantage of their features. But people were not able to develop those sufficiently smart compilers. Consider predicated execution for Itanium. Predication turns out to be a hard problem. With architectures (like ARM32) that have only one predicate, it gets used SOME of the time. Itanium has an array of 64 predicate bits. Humans can specially craft examples that show the advantages of the Itanium ISA, but compilers just don't exist that can do that well in the general case.
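To make the predication point concrete, here is a rough, hypothetical C sketch (the function names and numbers are made up for illustration). A compiler can "if-convert" the first loop into something like the second, which is the kind of transformation predication enables: on ARM32 the increment could become a single conditional instruction, and on Itanium each instruction can be guarded by one of the 64 predicate registers.

/* Hypothetical sketch: branchy code vs. the branch-free form
 * that if-conversion / predication aims for. */
#include <stddef.h>

/* Branchy version: whether the branch is taken depends on the data,
 * so the hardware branch predictor may mispredict often. */
int count_above_branchy(const int *a, size_t n, int threshold)
{
    int count = 0;
    for (size_t i = 0; i < n; i++) {
        if (a[i] > threshold)      /* data-dependent branch */
            count++;
    }
    return count;
}

/* Branch-free version: the comparison result is folded into the
 * arithmetic, so there is no branch to predict at all. */
int count_above_predicated(const int *a, size_t n, int threshold)
{
    int count = 0;
    for (size_t i = 0; i < n; i++) {
        count += (a[i] > threshold);   /* evaluates to 0 or 1 */
    }
    return count;
}

Humans (and compilers, in simple cases like this) can do the conversion by hand; the hard part is doing it well in general across 64 predicates.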

Reply Parent Score: 3

RE[7]: Comment by Drumhellar
by Alfman on Mon 20th May 2013 15:22 in reply to "RE[6]: Comment by Drumhellar"
Alfman Member since:
2011-01-28

theosib,

"This reminds me of Ray Kurzweil's stupid singularity thing, which seems to imply that the instant that computers are as fast as the human brain, they'll magically develop human intelligence. It doesn't matter how fast they are if we don't know the algorithms for human intelligence. And we still don't."

I don't know if I could believe that. I think we might eventually get computers that are convincing enough to mentally pass as human and be indiscernible in every non-physical test, and yet I'd still have a lot of trouble considering any such invention sentient, because I "know" that it's not. Then again, it's hard to understand what consciousness is at all.


"There's the same problem with compilers. I'm reminded of two events in computer history. One is LISP machines, and the other is Itanium. In both cases, hardware designers assumed that a 'sufficiently smart compiler' would be able to take advantage of their features."

I know what you mean; however, it's not necessarily the case that we'd have to solve such problems directly. With brute forcing (or optimized variants like genetic algorithms) the programmer doesn't solve the problem at all, but writes a fitness function whose sole purpose is to rate the success of solutions that are derived in random and/or evolutionary ways.


There was a pretty awesome game I played (a Java applet) many years ago where you would specify the fitness function, and the game would evolve 2D "creatures" with muscles and basic neurons; after a few thousand iterations you'd have creatures that could walk. More iterations and they could avoid obstacles. Maybe it would be possible to develop a genetic algorithm for the compiler as well. It's kind of what I meant earlier: even naive (unintelligent) approaches like this can produce good results, given enough iterations.
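As a rough illustration, here is a minimal, hypothetical genetic-algorithm sketch in C (not the applet above, and not a real compiler pass; the genome, parameters, and fitness function are all made up). Each individual is just a bit string, and the fitness function is the only problem-specific piece; in principle that same loop could score candidate code sequences by timing them instead of counting bits.

/* Minimal genetic-algorithm sketch: evolve bit strings toward
 * whatever the fitness function rewards. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define POP      64     /* population size */
#define GENES    32     /* bits per individual */
#define GENS     200    /* generations to run */
#define MUT_RATE 0.02   /* per-bit mutation probability */

/* Fitness function: the only problem-specific part. Here it simply
 * counts 1-bits; a compiler-tuning variant might instead measure the
 * runtime of code generated from the genome. */
static int fitness(const unsigned char *g) {
    int score = 0;
    for (int i = 0; i < GENES; i++) score += g[i];
    return score;
}

int main(void) {
    unsigned char pop[POP][GENES], next[POP][GENES];
    srand((unsigned)time(NULL));

    /* random initial population */
    for (int i = 0; i < POP; i++)
        for (int j = 0; j < GENES; j++)
            pop[i][j] = rand() % 2;

    for (int gen = 0; gen < GENS; gen++) {
        for (int i = 0; i < POP; i++) {
            /* tournament selection: draw two random pairs and let the
             * fitter individual of each pair become a parent */
            int a = rand() % POP, b = rand() % POP;
            int c = rand() % POP, d = rand() % POP;
            const unsigned char *p1 = fitness(pop[a]) > fitness(pop[b]) ? pop[a] : pop[b];
            const unsigned char *p2 = fitness(pop[c]) > fitness(pop[d]) ? pop[c] : pop[d];

            /* one-point crossover followed by per-bit mutation */
            int cut = rand() % GENES;
            for (int j = 0; j < GENES; j++) {
                next[i][j] = (j < cut) ? p1[j] : p2[j];
                if ((double)rand() / RAND_MAX < MUT_RATE)
                    next[i][j] ^= 1;
            }
        }
        memcpy(pop, next, sizeof pop);
    }

    /* report the best individual found */
    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness(pop[i]) > fitness(pop[best])) best = i;
    printf("best fitness after %d generations: %d/%d\n",
           GENS, fitness(pop[best]), GENES);
    return 0;
}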

Reply Parent Score: 2

RE[8]: Comment by Drumhellar
by theosib on Mon 20th May 2013 17:58 in reply to "RE[7]: Comment by Drumhellar"
theosib Member since:
2006-03-02

Yeah, we call that genetic programming. There's lots of interesting work in that area.

Reply Parent Score: 2

RE[8]: Comment by Drumhellar
by asb_ on Mon 20th May 2013 18:01 in reply to "RE[7]: Comment by Drumhellar"
asb_ Member since:
2013-05-17

The problem is the number of iterations. The state space for finding an optimal layout for a given FPGA design would be immense, I would imagine; something on the order of the universe's age (in hours, or even seconds). All the computers on Earth wouldn't be able to search that state space in any reasonable period of time.

Think about a simple problem like all the possible configurations of a 32-bit 1024x1024 bitmap. The number of possibilities is huge.
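(For scale: each pixel can take 2^32 values and there are 1024x1024 = 1,048,576 pixels, so there are (2^32)^1,048,576 = 2^33,554,432 distinct bitmaps, a number with over ten million decimal digits.)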

Still, with an intelligent algorithm that can reduce this state space without sacrificing the optimal design, there is potential, I'd say.

Reply Parent Score: 2