Many aspects of C stem from the desire to allow single-pass compilation. Many early compilers would read a little source code, generate some assembly or machine code, forget most of what they had just read, read a little more, generate a little more code, and so on. A compiler that generates machine code directly may need to keep a list of back-patches for things like forward jumps, but it can generate code for functions larger than its available RAM.
Many machines have several registers that can be used to hold values, but a compiler cannot determine which variables could most usefully be kept in registers at any given point in the code unless it knows how those variables will be used later. Given something like:
void test(void) {
    int i, j, *p;
    p = &i;
    i = j = 0;
    do {
        j++;
        *p += 10;
        j++;
        ...
a single-pass compiler would have no way of knowing whether it could safely keep j in a register across the access to *p. Flushing j to memory before *p+=10; and reloading it afterward would forfeit most of the benefit of keeping it in a register, but if the compiler omitted the flush and reload and the code later contained p=&j; before looping back, it would have a problem: every pass through the loop after the first would need j to be in memory when *p+=10; executes, yet by then the compiler would already have forgotten the code it would need to change for the second pass.
This problem was resolved by specifying that if a variable is declared register, the compiler may safely generate code that assumes it will not be affected by pointers. The prohibition on taking such a variable's address was, IMHO, needlessly broad (*), but it was easier to specify than a rule that would have allowed the qualifier in more circumstances.
(*) The semantics would be useful even today if register were a promise that the compiler may safely keep a variable in a register, provided it flushes the variable to memory whenever its address is taken and keeps it there until the next time execution either branches backward [via a looping construct or goto] past code that used the variable, or enters a loop in which the variable is used.