Maybe you haven’t worked with state machines since your college computer science courses. Jon Shemitz offers a reason to dust off the technique with .NET: object-oriented state machines can be easier to read and debug than their enum and switch equivalents.
FSMs (finite state machines) are in everyday use in hardware description languages (e.g. VHDL and Verilog), using enumerated states and case clauses. In those languages, which are geared toward compiling to hardware, every computation (say, anything from a simple addition to part of a complex SDRAM controller) is implemented as a separate process, and all processes run in parallel. Every change on an input of a process triggers re-evaluation of its computation (that is how digital hardware simulators work; in real systems, of course, electricity flows continuously 🙂).
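The enum-and-case pattern described above can be sketched outside an HDL too. Here is a minimal illustration in Python (all names are my own, not from the article): a rising-edge detector written the way an HDL process would be, as a pure step function that is re-evaluated for every new input sample.

```python
from enum import Enum

class State(Enum):
    LOW = 0
    HIGH = 1

def step(state, sample):
    """One 'process' evaluation: return (next_state, output).

    Output is True only on a low-to-high input transition,
    like a rising-edge detector process in VHDL/Verilog."""
    if state is State.LOW:
        return (State.HIGH, True) if sample else (State.LOW, False)
    # state is State.HIGH
    return (State.HIGH, False) if sample else (State.LOW, False)

def run(samples):
    """Feed a sequence of samples through the machine."""
    state, edges = State.LOW, []
    for s in samples:
        state, out = step(state, s)
        edges.append(out)
    return edges
```

For example, `run([0, 1, 1, 0, 1])` flags the two rising edges and nothing else. Keeping `step` as a pure function of (state, input) is what makes this style easy to test and to reason about, much like a combinational process with a registered state.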
That massively parallel programming style is quite elegant for many tasks. The fact that the simulator automatically triggers events to propagate the state of the system between live processes is appealing.
Interestingly, I’m doing exactly this: a large, parallel behavioral system for controlling legged robots, with fair success. It’s not revolutionary; I’m just experimenting with implementing my own subsumption architecture, as designed by Rodney Brooks in the ’80s. Fun stuff.
http://home.earthlink.net/~zakariya/files/Wandering.png
It also leads to interesting stuff like “learning” mechanisms that can predict what an underlying state machine will generate. It’s sort of like “muscle memory”.
Of course, every computer program is a state machine 🙂
Well, only if your program keeps some internal memory of the evolution of its inputs and outputs.
The “Hello World” program is not a state machine.
(Of course, anyone could argue the contrary as well …)
Maybe the UNIX ‘yes’ command is the most stateless software.