Why is the initial state of the 6502's interrupt flag a 1?

I am emulating a 6502 processor and am almost done (at the testing stage right now). I am using some NES tests from the nesdev site, and they tell me that both the interrupt flag and the unused (5th) flag are supposed to be set to 1 (i.e. interrupts disabled), but why? I can understand the unused-flag part, since it ... well ... is not used, but I do not understand the interrupt flag. I searched Google and some sites confirm that it should be set to 1, but none of them explains the reason. Why should interrupts be blocked from the very beginning of the program?

2 answers

When power is applied, the "unused" bit in the status register is hard-wired to logic "1" by the CPU's internal circuitry. It can never be anything other than "1", since it is not controlled by any internal flag or register but is determined by a physical connection to a "high" signal line.

The "I" flag in the status register is initialized to logic "1" by the CPU's Reset and, of course, can be changed by the SEI and CLI instructions, as well as by the processor itself (for example, during IRQ processing). The reason the default state is "1" (i.e. interrupts disabled) is so that the host system can execute its Reset/startup code without having to account for and handle IRQ assertions.
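Since the asker is writing an emulator, the two behaviors above can be sketched emulator-side. This is a minimal illustration (the `CPU` class, flag names, and method names are my own, not from the original post): the I flag comes up set on reset, the unused bit always reads 1, and SEI/CLI only toggle the I bit.

```python
# Minimal sketch of the 6502 status register at reset (assumed emulator
# structure): bit 2 is the interrupt-disable flag "I", bit 5 is the
# "unused" flag that is hard-wired to 1 in the real silicon.

FLAG_I      = 0x04  # interrupt-disable flag (bit 2)
FLAG_UNUSED = 0x20  # "unused" flag (bit 5), always reads 1

class CPU:
    def reset(self):
        # On Reset the I flag comes up set, and the unused bit reads 1.
        self.p = FLAG_I | FLAG_UNUSED

    def sei(self):
        # SEI: set interrupt-disable.
        self.p |= FLAG_I

    def cli(self):
        # CLI: clear interrupt-disable; the unused bit stays 1 regardless.
        self.p = (self.p & ~FLAG_I & 0xFF) | FLAG_UNUSED

cpu = CPU()
cpu.reset()
assert cpu.p & FLAG_I and cpu.p & FLAG_UNUSED   # both set after reset
cpu.cli()
assert not (cpu.p & FLAG_I)                     # I cleared by CLI
assert cpu.p & FLAG_UNUSED                      # unused bit still 1
```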

Many 6502 host systems depend on an external source to assert IRQ and NMI, often a VIA or CIA companion chip developed by MOS Technology specifically as an interface adapter, with on-chip timers and other event handlers designed to work seamlessly with the 6502 and raise interrupts in response to predefined hardware conditions. These companion chips require some software-controlled configuration to put them into a known state before they start watching hardware events and raising interrupts accordingly.

Since these chips can power up in potentially undefined states, the 6502 does not want to start servicing interrupts from them immediately, because those interrupts could be entirely spurious. With the "I" flag defaulting to "on", the CPU comes out of Reset knowing that software can initialize the rest of the host system, including support chips such as the VIA and CIA, without the possibility of a spurious IRQ arriving before the whole system is in a state where it can be handled. As an example, consider a scenario in which the CPU's IRQ vector in ROM points to an indirect vector in RAM, which the Reset code initializes with the address of the IRQ service routine. If an IRQ occurred before the Reset code had initialized the RAM vector, that vector would almost certainly point to a random address (perhaps, but not guaranteed to be, $0000), and a system crash would likely follow. With the "I" flag set by default, no IRQ can occur until the program issues a CLI, which it would do only after the RAM vector has been correctly initialized to point to a valid IRQ service routine.
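The indirect-vector hazard described above can be modeled in a few lines. This is a hypothetical sketch (the RAM vector address `$0314` and the handler address `$C123` are illustrative, not from the original): until the Reset code writes the pointer, an IRQ dispatched through it would jump to whatever garbage the RAM happened to contain.

```python
# Hypothetical model of the scenario above: the ROM IRQ handler jumps
# through a two-byte little-endian pointer held in RAM. Before Reset code
# initializes that pointer, it holds undefined power-on contents (modeled
# here as zeroes).

RAM_VECTOR = 0x0314           # assumed RAM location of the soft IRQ vector

memory = bytearray(0x10000)   # 64 KiB address space, all zeroes in this model

def irq_target(mem):
    """Read the 16-bit address an IRQ would jump to via the RAM vector."""
    return mem[RAM_VECTOR] | (mem[RAM_VECTOR + 1] << 8)

# Before Reset code runs: the pointer is garbage ($0000 in this model),
# so a premature IRQ would jump somewhere meaningless and likely crash.
assert irq_target(memory) == 0x0000

# Reset code installs the real service routine address; only after this
# (and similar initialization) would it be safe to execute CLI.
irq_service = 0xC123          # hypothetical handler address
memory[RAM_VECTOR] = irq_service & 0xFF
memory[RAM_VECTOR + 1] = (irq_service >> 8) & 0xFF
assert irq_target(memory) == 0xC123
```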

If you study typical 6502 Reset code examples, you will see a recurring theme: a set of system-initialization routines configures the host environment (including support-chip timer registers that generate IRQs), and a CLI instruction is one of the last things the code executes. Most environments are heavily IRQ-driven, doing their housekeeping and service work at precise intervals (for example, once per video frame), so the Reset code ends with CLI to indicate that initialization, including the setup of IRQ generation, is complete and IRQ servicing can begin.
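The ordering that the paragraph describes (configure everything first, enable interrupts last) can be sketched as follows. All function names here are illustrative placeholders, not real firmware routines:

```python
# Sketch of the typical Reset-code ordering: hardware and vectors are set
# up first, and interrupts are enabled only as the very last step.

log = []

def init_stack():        log.append("stack")    # set up the stack pointer
def init_ram_vectors():  log.append("vectors")  # install IRQ/NMI soft vectors
def init_timer_chip():   log.append("timer")    # e.g. a VIA/CIA timer that raises IRQs
def cli():               log.append("CLI")      # interrupts become possible only now

def reset_handler():
    init_stack()
    init_ram_vectors()
    init_timer_chip()
    cli()   # last step: IRQ service can safely begin

reset_handler()
assert log[-1] == "CLI"   # CLI happens after all initialization
```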

Now, having said all that, what is there to stop an NMI from being asserted at any time during Reset processing? Nothing. The CPU will dutifully suspend the Reset routine and jump through the NMI ROM vector, and the "I" flag has no effect (as expected: NMI is Non-Maskable and cannot be ignored). So, ironically, although the "I" flag defaults to "1" to protect the Reset code from spurious or premature IRQs, there is always the possibility of a spurious NMI that cannot be blocked, and which can therefore cause exactly the same problem if its vector points into RAM (directly or indirectly).

It is the programmer's job to manage such untimely NMIs so that, if they occur, they do not affect, or at least do not interfere with, Reset processing. And since the software has to accommodate that scenario anyway, it would arguably not be much more effort to do the same for IRQs, which means the default "1" on the "I" flag could have been omitted from the CPU's initialization circuitry. Alternatively, NMIs could have been hard-wired to be ignored during Reset; but then, of course, they would not be Non-Maskable in all cases, and you would need a special "Reset" flag in the status register that software could clear to tell the CPU that Reset processing is complete and NMIs may now be serviced as usual. But I digress. ;)


Typically, a machine needs to configure its global state before it can safely receive an interrupt. If interrupts were enabled from the start, your interrupt routine would never know what had been initialized and what had not.

So this allows a known state to be established before events start rolling in.

On the NES specifically, this probably doesn't matter much: the built-in hardware generates non-maskable interrupts, and it doesn't do so until it is told to start. Most cartridges with interrupt-generating hardware on board likewise need to be told explicitly before they start generating interrupts, rather than doing so at power-on.

However, this behavior of the 6502 is common to the part. An example of the problem it might be trying to avoid: a system with a two-second startup time and a keyboard that generates interrupts. The interrupt routine might buffer keystrokes, but if it tried to do so before the rest of the system had been configured, it could end up writing bytes to a random location in memory.


Source: https://habr.com/ru/post/1484356/

