I am writing a simulation of a (very slow and primitive) processor.
For example, suppose the clock frequency is 1 Hz. I take this to mean that one instruction can/will be processed per second. Some instructions take longer than others: adding 1 + 0 takes less time than 1 + 7, because the latter causes a ripple of carry bits, which takes a non-zero amount of time.
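To make the ripple-carry point concrete, here is a small sketch (my own illustration, not part of any existing code) that counts how many carry propagations occur when two numbers are added bit by bit, as a rough proxy for the adder's delay:

```python
def ripple_carry_delay(a, b, width=8):
    """Count carry propagations when adding a and b bit by bit.

    The count is a rough proxy for ripple-carry adder delay:
    each carry that ripples into the next stage costs one step.
    """
    carry = 0
    steps = 0
    for i in range(width):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        total = bit_a + bit_b + carry
        carry = total >> 1
        if carry:              # a carry ripples into the next stage
            steps += 1
    return steps

print(ripple_carry_delay(1, 0))  # no carries: 0 steps
print(ripple_carry_delay(1, 7))  # carries ripple through three bits: 3 steps
```

Under this model, 1 + 0 completes in zero propagation steps while 1 + 7 takes three, which is exactly the asymmetry described above.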
I need each instruction to execute only after all previous instructions have completed.
My options seem to be:
1. Find out how long the longest instruction takes and set the clock period to at least that long.
2. Create an observer with state that does not allow a future instruction to execute until the previous one has completed.
3. Or do I simply misunderstand the problem?
With #1, it seems to me I still risk a race condition where one instruction is incomplete before the next one begins. With #2, it seems to me I risk an unpredictable/variable clock speed that could cause me problems later.
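For what it's worth, here is how I imagine option #2 might look — a minimal sketch (class and method names are my own invention) where a gate lets the next instruction start only after the previous one signals completion:

```python
import threading
import time

class Sequencer:
    """Sketch of option #2: gate each instruction so it starts
    only after the previous one has signalled completion."""

    def __init__(self):
        self._done = threading.Event()
        self._done.set()                # nothing is in flight initially

    def execute(self, instruction, duration_s=0.0):
        self._done.wait()               # block until the previous instruction finishes
        self._done.clear()              # mark the execution unit as busy
        try:
            instruction()               # perform the simulated work
            time.sleep(duration_s)      # model this instruction's latency
        finally:
            self._done.set()            # completion signal releases the next caller
```

With this shape, callers on any thread serialize naturally, but the effective "clock" becomes whatever each instruction's latency happens to be — which is exactly the variable-speed concern raised above.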
How can I solve this? Are there any hints on how a real processor deals with this problem?