Thinking about it, if they are capable of getting the computer to simulate the complete physics behind all the operations in the chip, why stop there?
I mean, is it feasible, in the future at least, to use the simulated data to implement an emulator? I.e. the full cycle: make the perfect emulator from perfect simulation, and finally compile that data into emulation code. In a portable way. And compiler-optimised.
While we are at it, it does not have to stop there either. Quirks, especially bugs, can be sorted out for correction; we can “cure” the level-one bugs and so on. Then we can use that “cured” structure to create a new, cleaner FPGA. Or something.
That, obviously, depends on the availability of FPGA compilers. Given that piece, we would then be able to daisy-chain these toolchains and evolve the structure. SkyNet will then be awakened.
You would still need to feed the emulator with code that exercises all possible (discovered) quirks of the simulated CPU to let the system learn its behaviour.
Actually, I think it is the other way round.
I believe the onus lies on the simulator. The physics simulation takes in a lot of information that is not actually crucial to the workings of the system. If we can make the physics simulator automatically run with all possible inputs, a table of all possible outputs can be built.
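As a toy sketch of what that exhaustive run would produce (the “simulated chip” here is a made-up 4-opcode ALU standing in for the real physics-level simulation), the table simply maps every possible input to its observed output:

```python
# Toy sketch: exhaustively drive a stand-in "simulated chip" and record
# every input -> output pair. toy_simulate is an assumption; a real
# physics-level simulator would sit behind the same interface.
from itertools import product

def toy_simulate(opcode, operand):
    """Stand-in for one physics-simulated step: a 4-opcode nibble ALU."""
    a, b = operand >> 4, operand & 0x0F
    ops = [lambda a, b: a + b,      # ADD: result keeps the carry bit
           lambda a, b: a & b,      # AND
           lambda a, b: a | b,      # OR
           lambda a, b: a ^ b]      # XOR
    return ops[opcode](a, b)

def build_io_table():
    """Run every possible (opcode, operand) pair and record the output."""
    return {(op, x): toy_simulate(op, x)
            for op, x in product(range(4), range(256))}

io_table = build_io_table()   # 1024 entries: the complete observed behaviour
```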
With that table of outputs, linked to inputs, we can generate (the important part is that we can generate it without human input) a set of equivalent instructions for the base case. It should match the documented assembly instruction set. Then the same table of outputs can be used to determine all the quirks to add on top of that base. This lets us create emulator code untouched by human hands.
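Continuing the toy example above (reference_model is again a made-up stand-in for the documented instruction set), the split into base behaviour plus quirk overlay could look like this:

```python
# Toy sketch: compare every recorded output against a reference model of
# the documented instruction set; the mismatches become the quirk overlay
# that sits on top of the generated base-case emulator.
def reference_model(opcode, operand):
    """Documented behaviour (assumption): same ALU, but ADD wraps at 4 bits."""
    a, b = operand >> 4, operand & 0x0F
    docs = [lambda a, b: (a + b) & 0x0F,  # ADD as the datasheet describes it
            lambda a, b: a & b,
            lambda a, b: a | b,
            lambda a, b: a ^ b]
    return docs[opcode](a, b)

def split_base_and_quirks(io_table):
    """Emulator = reference model + whatever this returns."""
    return {inp: (reference_model(*inp), out)
            for inp, out in io_table.items()
            if out != reference_model(*inp)}

quirks = split_base_and_quirks(io_table)  # here: the ADD inputs that carry out
```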
The only thing that requires human intervention is to categorise and selectively fix the quirks. (I do not even begin to imagine that computers could determine whether a quirk was intentional or not.)
It may take some time, but I think the results would be worth it. Not to mention that a clean input-to-output table will help to test automated FPGA compilers. It could even be combined with evolution: make a clean table, compile the FPGA, derive the input/output table for that FPGA, evolve a new design that is less dirty, rinse and repeat until cleanliness is within the required error bounds. SkyNet is awaiting us.
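The rinse-and-repeat loop, written out as a toy sketch (everything here is a stand-in: a “design” is just a candidate I/O table, and the mutate step is a trivial nudge towards the clean table where a real loop would invoke an FPGA toolchain):

```python
# Toy sketch of the evolve-until-clean loop. A real version would compile
# an actual FPGA design and re-derive its I/O table each round; here the
# "design" is just a table we nudge towards the clean one.
import random

def evolve_until_clean(clean_table, error_bound=0.0, max_rounds=1000):
    design = {inp: random.randrange(256) for inp in clean_table}  # random start
    error = 1.0
    for _ in range(max_rounds):
        mismatches = [inp for inp in clean_table if design[inp] != clean_table[inp]]
        error = len(mismatches) / len(clean_table)
        if error <= error_bound:
            break                                      # cleanliness within bounds
        for inp in random.sample(mismatches, max(1, len(mismatches) // 10)):
            design[inp] = clean_table[inp]             # the "evolution" step
    return design, error

# e.g. evolve_until_clean(io_table) converges on the clean behaviour above
```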
The visual 6502 at http://visual6502.org/ is cool!
Superb piece of hackery!
I’m hacking away on the ZX81’s ULA at the moment (sadly not at this sort of level, just software simulation) and attaching the output of that to a CRT simulator. The whole thing needs to emulate at 6.5MHz, and even on my dual-core 2.1GHz machine that can be a hard ask when you REALLY start emulating a system (display and all!)
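Just as a back-of-envelope illustration of why that is a hard ask (plain arithmetic, nothing ZX81-specific assumed beyond the clock figures already mentioned):

```python
# How much host CPU time one emulated ULA tick gets at those clock speeds.
ULA_CLOCK_HZ = 6_500_000        # 6.5 MHz emulated clock
HOST_CLOCK_HZ = 2_100_000_000   # one 2.1 GHz core
per_tick = HOST_CLOCK_HZ / ULA_CLOCK_HZ
print(f"~{per_tick:.0f} host cycles per emulated tick")
# -> roughly 323 host cycles to step the ULA, feed the CRT simulation and
#    keep the display in sync, every single tick.
```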