Resolving processor synchronization issues

I am writing a simulation of a (very slow and primitive) processor.

For example: let's say the clock frequency is 1 Hz. I take this to mean that one instruction can / will be processed every second. Some instructions take longer than others: adding 1 + 0 takes less time than adding 1 + 7, because the latter causes a ripple of carry bits, which takes a non-zero amount of time.
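To illustrate what I mean, here is a minimal Python sketch of a ripple-carry add that counts how far the carry propagates; the step count is just a crude stand-in for gate delay, not a real timing model:

```python
def ripple_add(a, b, width=8):
    """Add two integers bit by bit, tracking how far the carry ripples.

    The returned step count is how many bit positions the carry chain
    reached -- a crude stand-in for gate delay.
    """
    result, carry, steps = 0, 0, 0
    for i in range(width):
        x, y = (a >> i) & 1, (b >> i) & 1
        result |= (x ^ y ^ carry) << i
        carry = (x & y) | (carry & (x ^ y))
        if carry:                  # carry must ripple into the next stage
            steps = i + 1
    return result, steps

print(ripple_add(1, 0))   # (1, 0) -- no carry to propagate
print(ripple_add(1, 7))   # (8, 3) -- the carry ripples through three stages
```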

I need each instruction to execute only after all previous instructions have completed.

Should I:

  • figure out how long the longest instruction takes and set the clock period to at least that?
  • create an observer whose state prevents the next instruction from executing until the previous one has completed? (A rough sketch of both options follows this list.)
  • Or do I simply misunderstand the problem?
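For concreteness, here is what #1 and #2 might look like in my simulator, in Python; the Instruction objects with an execute() method are a made-up interface:

```python
import time

def run_fixed_clock(instructions, period_s):
    # Option #1: a fixed clock period chosen to cover the slowest instruction.
    for ins in instructions:
        start = time.monotonic()
        ins.execute()                                  # may finish early
        time.sleep(max(0.0, period_s - (time.monotonic() - start)))

def run_completion_gated(instructions):
    # Option #2: start the next instruction only when the previous one
    # has finished, so the effective clock period varies per instruction.
    for ins in instructions:
        ins.execute()                                  # returns on completion
```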

With #1, it seems to me that I still risk a race condition: an instruction may be incomplete before the next one begins. With #2, I seem to risk an unpredictable / variable clock speed that could cause me problems later.

How can I solve this? Are there any hints on how a real processor deals with this problem?

+3
4 answers

In a real processor, the ALU is combinational logic: you put the operands on its inputs, the signals propagate through the gates, and after some delay the result appears at the outputs. Nothing "finishes" in the software sense; the outputs simply become valid once the slowest path has settled. The clock is chosen so that even the worst-case path has settled before the next clock edge latches the result. So your option #1 is essentially what real hardware does.

If you want to watch this at the signal level, circuit-level simulators such as PSpice (MicroSim), which I used at university, let you see the propagation delays directly.
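A minimal sketch of that idea in Python, assuming a ripple-carry adder and a made-up per-stage gate delay: scan every operand pair for the longest carry chain (the critical path) and derive a safe clock period from it.

```python
GATE_DELAY_S = 0.01   # assumed delay per adder stage (made-up figure)
WIDTH = 4             # small word size keeps the exhaustive scan fast

def carry_chain(a, b):
    """Highest stage the carry reaches when adding a and b (ripple-carry)."""
    carry, reach = 0, 0
    for i in range(WIDTH):
        x, y = (a >> i) & 1, (b >> i) & 1
        carry = (x & y) | (carry & (x ^ y))
        if carry:
            reach = i + 1
    return reach

# The critical path is the worst case over every operand pair.
worst = max(carry_chain(a, b)
            for a in range(2 ** WIDTH) for b in range(2 ** WIDTH))
period = worst * GATE_DELAY_S          # every add settles within one period
print(f"critical path: {worst} stages -> clock period {period:.2f} s")
```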

+3



Option (1) is how real processors handle it. Suppose the slowest path through the logic takes x ns and the clock period is y ns; the design guarantees x < y, so every result has settled before the next clock edge latches it. (In other words, y is never chosen so small that x < y becomes false.)

Of course, if you make y too small, bad things happen. But it is quite realistic; this happens when you overclock chips. They become unstable as the clock frequency increases.
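A toy Python illustration of the x < y rule; the worst-case figure and the "garbage when violated" behaviour are assumptions chosen for demonstration, not a model of any real chip:

```python
import random

WORST_CASE_STEPS = 8        # x: slowest path through the adder, in gate delays

def latched_add(a, b, clock_steps):
    """Latch the adder output after clock_steps gate delays (toy model).

    If the clock period covers the worst-case path (x < y), the result is
    always correct; otherwise we may latch a half-settled, undefined value.
    """
    if WORST_CASE_STEPS < clock_steps:
        return (a + b) & 0xFF
    return random.getrandbits(8)       # signals still rippling at the edge

print(latched_add(200, 100, clock_steps=10))  # safe clock: always 44
print(latched_add(200, 100, clock_steps=4))   # "overclocked": unpredictable
```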

+1

Source: https://habr.com/ru/post/1705125/

