You’re working into the wee hours trying to fix that bug. But by the time the debugger catches it, the original cause has long since passed. How are you going to figure out what went wrong a billion instructions earlier? It’s at times like these that you need a reversible computer. The idea is simple: a computer merely executes a sequence of elementary instructions. If we could just run through that list in reverse, we could work backwards to the original cause of our error. But of course things are never quite that simple.
I hear that virtual machines are great for this. Couldn’t you run one underneath the host OS and have it keep a record of everything that has transpired? It’s not exactly something I’m familiar with… enlighten me.
That’s the first K5 article I’ve seen that didn’t result in an anti-American screed in the comments. Have they changed or something?
Reversible computing is an interesting idea, and probably worthy of serious study. However, I’m not convinced of its utility except in very specific applications – generally, you want new data, not the old stuff.
-Erwos
Irreversible computation discards information, which decreases the entropy of the machine’s state; by the second law, that decrease must be paid for with heat dissipated into the environment, so energy-free irreversible computation stands exactly where perpetual motion stands. Just as it helps in mechanics to understand that friction is what wastes energy, I find it very useful in programming to remember that it is the DISCARDING of information that wastes computing resources in general. Architectural trade-offs can shift this expense between heat, manufacturing cost (die size), and time (throughput or latency), but it is always the “forgetting” that costs.
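To put a number on how little “forgetting” costs at the physical limit, here is a back-of-the-envelope sketch of the Landauer bound (minimum heat dissipated per erased bit). The constant is the standard Boltzmann constant; the 300 K figure is just an assumed room temperature.

```haskell
-- Landauer's principle: erasing one bit dissipates at least
-- k_B * T * ln 2 of heat into the environment.
boltzmann :: Double
boltzmann = 1.380649e-23            -- Boltzmann constant, in J/K

landauerLimit :: Double -> Double   -- minimum joules per erased bit
landauerLimit temp = boltzmann * temp * log 2
```

At 300 K this works out to roughly 2.9e-21 joules per bit – many orders of magnitude below what real logic gates dissipate today, which is why in practice we just accept the cost of forgetting.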
The linked article is basically junk.
Unlike many posters who will say the same thing, I have at least some (graduate) coursework experience in reversible computing.
Reversible computing has nothing to do with rolling back the clock to find the causes of errors. Most work on reversible computing is, as ElCabri indicated, concerned with entropy.
The essential fact is that erasing a bit without knowing its current state decreases the entropy of the memory, and by the second law that decrease must be paid for by dissipating heat – at least kT ln 2 per bit (Landauer’s principle). Many models for reversible computing exist–such as the billiard-ball model, which looks a lot like it sounds and forms the basis for many important ideas about how to compute with photons, or logical components such as the Toffoli gate, which is universal for reversible logic. Advanced work is being done now on methods for discarding unwanted data without “erasing” it, and so on.
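Since the Toffoli gate keeps coming up, here is a minimal sketch of it in Haskell: a controlled-controlled-NOT that flips its third bit iff the first two are True. It is its own inverse, and fixing inputs to constants recovers ordinary AND and NOT, which is the sense in which it is universal. The helper names below are illustrative, not from any library.

```haskell
-- Toffoli (CCNOT) gate: flip c iff both controls a and b are set.
-- (/=) on Bool is XOR, so applying the gate twice restores the input.
toffoli :: (Bool, Bool, Bool) -> (Bool, Bool, Bool)
toffoli (a, b, c) = (a, b, c /= (a && b))

-- AND falls out by fixing the target bit to False:
-- the third output is then exactly a AND b.
andGate :: Bool -> Bool -> Bool
andGate a b = let (_, _, c) = toffoli (a, b, False) in c

-- NOT falls out by fixing both controls to True.
notGate :: Bool -> Bool
notGate x = let (_, _, c) = toffoli (True, True, x) in c
```

Note the price of reversibility: the “garbage” control outputs have to be carried along rather than thrown away, which is precisely the discarding problem mentioned above.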
From a practical standpoint, it may never be possible to perform zero-cost computing. But then again, it may. The universality of the Toffoli gate implies that it can be used to construct the full gamut of digital logic circuits, and one may, for example, imagine a superconducting computer in deep space, running on Toffoli logic, apparently forever.
There’s no real design OR PROGRAMMING advantage to reversible logic–it’s much simpler to accept the infinitesimal energy cost of “forgetting,” as ElCabri nicely puts it. The more interesting question here is whether zero-cost computation really is possible–or, to put it another way, whether thermodynamic and informational entropy are the same thing. Surprisingly, nobody really knows whether they are or not–and that leads to messy issues like Maxwell’s Demon, a proposed “perpetual motion” device that at least conceptually could exist if thermal and informational entropy are not the same thing.
Reversible computing is a fascinating topic, and there’s a lot of information available online about it. The linked article is garbage. If you feel the urge to read about RC, do yourself a favor and google instead.
KOMPRESSOR
Billiard Ball Computing:
http://www.cs.berkeley.edu/~jimlin/bball/whatis.html
Toffoli Gate:
http://beige.ucs.indiana.edu/B679/node94.html
MIT’s AI Lab on Reversible Computing:
http://www.ai.mit.edu/~cvieri/reversible.html
Could someone elaborate on Maxwell’s Demon? I’ve read about it often, but it never made sense — it starts from the assumption that the demon expends no energy doing its job of opening and closing the flap. At least the Feynman Lectures on Computation assume this, as do the first few Google links.
What justifies treating the energy the demon needs to do its job as irrelevant? Can we seriously assume no work is expended?
I could try and explain my understanding of it, but perhaps I’ll let people who understand it better than me do the talking.
I will say that my basic understanding is that the real-world analogue of the demon would likely have the “opening” or “shutting” as a fundamental property; can we say that a polarizing filter “opens” or “closes” depending on the polarization of light?
http://www.auburn.edu/~smith01/notes/maxdem.htm
http://www.chem.uci.edu/education/undergrad_pgm/applets/bounce/demo… (this one’s very good and seems to provide an example of a demon where the “opening and closing” of the flap wouldn’t cost any energy)
http://www.maxwellian.demon.co.uk/name.html
oh, hell:
http://www.google.com/search?hl=en&ie=UTF-8&oe=UTF-8&q=maxwell%…
KOMPRESSOR
Sometimes people who specialize in a field are the people who are missing the big picture. Reversible computing *is* used to efficiently roll back clocks to find an error. Do some research and check out the multitude of theorems and papers on the subject. Even a basic google search reveals that much (hint: use keywords like reversible, rollback, checkpointing, computation, simulation). Reversible computation is a much bigger field than just the handful of people who study the entropy of computing devices. Doing a bit of graduate work in a field is no substitute for actually using it in the real world and seeing how people put ideas into practice. Or maybe the author just hallucinated the work on reversing the flow of computation in Haskell and the links are all fake.
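The rollback idea described above can be sketched in a few lines: if every step of a computation records enough information to undo itself, an error state can be walked backwards instruction by instruction. This is a toy illustration in Haskell; the types and names are invented for the example, not taken from any real rollback or checkpointing library.

```haskell
-- Invertible steps over an Int state. Mul is only safely invertible
-- when the multiplier divides the state exactly (nonzero, no remainder),
-- which holds for any state actually produced by running Mul forward.
data Step = Add Int | Mul Int

apply :: Int -> Step -> Int
apply s (Add n) = s + n
apply s (Mul n) = s * n

invert :: Int -> Step -> Int
invert s (Add n) = s - n
invert s (Mul n) = s `div` n

-- Run forward, keeping the trace so we can reverse later.
run :: Int -> [Step] -> (Int, [Step])
run s0 steps = (foldl apply s0 steps, steps)

-- Walk the trace backwards from the final (possibly buggy) state
-- to recover the initial state.
rollback :: (Int, [Step]) -> Int
rollback (s, trace) = foldl invert s (reverse trace)
```

For example, `run 3 [Add 4, Mul 2]` ends at 14, and rolling the trace back recovers 3. The catch, as with reversible logic generally, is the trace itself: the “excess data” grows with every step unless you can uncompute it.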
“There’s no real design OR PROGRAMMING advantage to reversible logic”. There are plenty of researchers who disagree with that. In fact, you can follow the links within the linked story to see that!
…do not mean what you apparently think they mean, “the”. They provide reversibility in transactions, not in computation. You do not work backwards to a checkpoint; you resume forward computation from a known-good checkpoint.
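That distinction – replaying forward from a snapshot rather than inverting anything – can be made concrete in a few lines. This is a toy sketch with invented names, not the API of any real transaction or checkpointing system:

```haskell
-- Run n steps of a computation, snapshotting the state every
-- `every` steps. No inverse operations are ever needed.
runWithCheckpoints :: Int -> (a -> a) -> a -> Int -> [(Int, a)]
runWithCheckpoints every step s0 n =
  [ (i, s) | (i, s) <- zip [0 ..] (take (n + 1) (iterate step s0))
           , i `mod` every == 0 ]

-- Recover the state at an arbitrary step by replaying FORWARD from
-- the latest checkpoint at or before it.
stateAt :: (a -> a) -> [(Int, a)] -> Int -> a
stateAt step cps i =
  let (ci, cs) = last [ c | c@(j, _) <- cps, j <= i ]
  in iterate step cs !! (i - ci)
```

The trade-off against true reverse execution is checkpoint spacing: sparser snapshots mean less storage but longer forward replays to reach the state you want.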
The link did not indicate that the reverse computation in Haskell had been put into practical use, merely that it was possible to use Haskell to generate the adjoint. You appear to have hallucinated a different conclusion.
Let me pause to note that the reversibility of Haskell is more a consequence of its functional nature than a deliberate design decision. Or did you already know that?
I loved your line about graduate work. Let me guess–you never made it back to school.
Here’s a hint. When discussing theoretical topics, people who study the theory in question have more domain knowledge than “real-world” programmers, the majority of whom hack away at business logic, not reversible logic.
Are you a habitual slashdot poster? You’ve got that “just enough knowledge to act cocky” aura about you.
KOMPRESSOR
The article isn’t about the tiny area you pretend to know something about. It is far more general, covering reverse-mode computation as a whole: adjoint code, alternative state-transformation monads, and also reversible computation. Check the linked article on simulating networks – it uses reversible computation. All of these subjects share themes, such as the problem of dealing with the excess data that reversible code generates. If you can’t see the connections, it’s probably best that you stay in your little niche, where you probably have skills quite adequate to getting a certain amount of low-level research done.
the, Ph.D.
I get the eerie feeling of listening in on just so many CS graduates of the type you should never hire if you ever want to get a job done.
Hello?
Distress call from the real world: Does this have any value for the work to be done today, tomorrow, or even in five years’ time?
There was once a very brilliant Indian mathematician, Ramanujan. One thing he loved was prime numbers. The person who brought him to England, G. H. Hardy, chided him for pursuing such useless avenues and tried hard to focus him on more Relevant issues.
Of course, now it turns out that prime numbers are used all over the place, and at the moment they make e-commerce possible. Had he had more time to pursue useless fancies, who knows what subjects he might have illuminated.
It’s often the case that CS grads aren’t the best programmers. Should they be?
No offense intended. I just wanted to make sure I wasn’t dropped into some alternate reality. 😉
“It’s often the case that CS grads aren’t the best programmers. Should they be?”
No. But more often than not, they make for superior software _design_, due to their superior (hopefully!) knowledge of theory.
-Erwos
This has nothing to do with reversible computing, but it is one of my favorite Haskell related sites.
http://conal.net/Pan/Gallery/default.htm
Yesterday, on the second day of my bug hunt, I thought of how neat it would be to have a time track à la Adobe Premiere (or most video-editing tools). Then I wake up the next day and read this. How sweet.