
The problem is the number of iterations. The state space for finding an optimal layout for a given FPGA design would be immense, I would imagine: something on the order of the universe's age measured in hours, or even seconds. All the computers on earth wouldn't be able to search that state space in any reasonable period of time.

Think about a simple problem like enumerating all the possible states of a 32-bit 1024x1024 bitmap. The number of possibilities is huge.
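A quick back-of-the-envelope calculation (not from the original comment, just a sanity check on that bitmap example) shows how absurd that count is without ever constructing the number itself:

```python
import math

# Each of the 1024*1024 pixels holds a 32-bit value, so the number of
# distinct bitmaps is 2 ** bits, where:
bits = 32 * 1024 * 1024          # 33,554,432 bits of state

# Number of decimal digits in 2**bits, computed via logarithms rather
# than by materializing the gigantic integer:
digits = math.floor(bits * math.log10(2)) + 1
print(digits)                    # roughly ten million decimal digits
```

A number with around ten million digits, versus an estimated ~10^80 atoms in the observable universe (an 81-digit number), which is why exhaustive search is hopeless.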

Still, with an intelligent algorithm that can reduce this state space without sacrificing an optimal design, there is potential, I'd say.

asb_,

"The problem is the number of iterations. The state space for finding an optimal layout for a given FPGA design would be immense, I would imagine: something on the order of the universe's age measured in hours, or even seconds. All the computers on earth wouldn't be able to search that state space in any reasonable period of time."

Having never studied the problem in any serious detail, I cannot know the full scope. Nevertheless, the typical software engineering way to work with such problems is to "divide and conquer" them. So if we have a program that needs to be converted into FPGA logic, and the full state of the program is too big to handle (since complexity grows exponentially with the size of the program), then we could break it up into individual functions and optimize those independently of the rest of the system. This way we're not brute forcing the entire space at once, just pieces of the puzzle at a time.
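The payoff of that decomposition can be shown with a toy sketch (nothing to do with real FPGA tooling; the layouts and costs below are made up): when the pieces really are independent, picking the best option per piece gives the same answer as brute-forcing the joint space, with linear instead of exponential work.

```python
from itertools import product

# Hypothetical example: each "function" has a few candidate layouts,
# each with a cost (lower is better).
candidate_costs = [
    {"layoutA": 3, "layoutB": 1},   # function 1
    {"layoutA": 2, "layoutB": 5},   # function 2
    {"layoutA": 4, "layoutB": 2},   # function 3
]

# Brute force over the joint space: |choices| ** n combinations.
joint_best = min(
    sum(piece[choice] for piece, choice in zip(candidate_costs, combo))
    for combo in product(["layoutA", "layoutB"], repeat=len(candidate_costs))
)

# Divide and conquer: optimize each piece independently (linear work).
piecewise_best = sum(min(piece.values()) for piece in candidate_costs)

# Identical result *because* the pieces don't interact; with 2 choices
# per piece, the joint search grows as 2**n while this stays at n.
assert joint_best == piecewise_best == 1 + 2 + 2
```

The catch, of course, is that real placement problems have cross-function interactions (routing, shared resources), so the independence assumption only holds approximately.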

"Still, with an intelligent algorithm that can reduce this state space without sacrificing an optimal design, there is potential, I'd say."

Well, if our goal isn't to find the global maximum, then it becomes far easier to search for local maxima. Consider a multidimensional function for which we want to find the "best" (highest, or lowest) point. The search space could be immense; the "intelligent" way to find the maxima is to use calculus to solve for the critical points (where the derivatives vanish) and evaluate the function at each one. However, when the calculus is not feasible (or when it produces too many critical points), we can cheat by solving the easier problem of searching for local maxima instead. Starting with candidates at random, we can trivially use Newton's method (applied to the derivative) to climb to peaks in the graph. By using sheer computing power we can collect many such locally maximal solutions. The best solution in our set might not be the best solution absolutely possible, but it still might be better than what a human can do, since humans generally are not able to solve for the global maximum either (i.e., we're more fallible than an imperfect computer playing chess).

It's really very interesting to think about how probabilistic solutions like this can be used to solve hard problems.

*Edited 2013-05-20 23:05 UTC*


theosig,

"This reminds me of Ray Kurzweil's stupid singularity thing, which seems to imply that the instant that computers are as fast as the human brain, they'll magically develop human intelligence. It doesn't matter how fast they are if we don't know the algorithms for human intelligence. And we still don't."

I don't know if I could believe that. I think we might eventually get computers that are convincing enough to mentally pass as human and be indiscernible in every non-physical test, and yet I'd still have a lot of trouble considering any such invention sentient, because I "know" that it's not. Then again, it's hard to understand what consciousness is at all.

"There's the same problem with compilers. I'm reminded of two events in computer history. One is LISP machines, and the other is Itanium. In both cases, hardware designers assumed that a 'sufficiently smart compiler' would be able to take advantage of their features."

I know what you mean, however it's not necessarily the case that we'd have to solve such problems directly. With brute forcing (or optimized variants like genetic algorithms), the programmer doesn't solve the problem at all, but writes a fitness function whose sole purpose is to rate the success of candidate solutions that are derived in random and/or evolutionary ways.
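A minimal sketch of that split makes it concrete. The fitness function below is the classic "OneMax" toy problem (count the 1-bits), chosen purely for illustration; the point is that the evolve loop knows nothing about how to construct a good solution, it only asks the fitness function to rate candidates:

```python
import random

def fitness(candidate):
    # The programmer only rates solutions; here: count the 1-bits.
    return sum(candidate)

def evolve(bits=64, pop_size=40, generations=200, mutation=0.02, seed=1):
    """Tiny genetic algorithm: truncation selection, single-point
    crossover, point mutation. A sketch, not a production GA."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, bits)       # single-point crossover
            child = a[:cut] + b[cut:]
            for i in range(bits):              # random point mutation
                if rng.random() < mutation:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children               # elitism: parents survive
    return max(fitness(c) for c in pop)
```

Swap in a fitness function that scores, say, compiled-code speed or FPGA layout quality, and the same loop searches that space instead; whether it does so *efficiently* is the open question the comment is raising.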

There was a pretty awesome game I played (a Java applet) many years ago where you would specify the fitness function, and the game would evolve 2D "creatures" with muscles and basic neurons. After a few thousand iterations you'd have creatures that could walk; with more iterations they could avoid obstacles. Maybe it would be possible to develop a genetic algorithm for the compiler as well. It's what I meant earlier: even naive (unintelligent) approaches like this can produce good results, given enough iterations.