Is all of a program's code loaded into memory when the program runs, or only the parts that are needed?
Most modern OSes load code "on demand": the OS loads the pages containing the application's starting point (main), begins execution there, and goes from there. When the application jumps to a piece of code that is not already in memory, a page fault occurs and the OS loads that bit.
After that, is this code / instruction set discarded and re-read from the physical disk each time the process gets processor time, or does the code remain in memory while the program is running?
If the OS decides that the memory is needed for something else, it can evict part of the code and reload it later when it is needed again [if it is needed again - if it was some part of the initialization, it may never be touched again]. Since read-only code pages can always be re-read from the executable on disk, nothing needs to be written out when they are evicted.
If two processes can share the same set of instructions, does this mean that each process still gets its own section of code in its virtual address space?
Of course, code can be shared between multiple copies of the same application. Again, whether this actually happens depends on the particular OS. Linux certainly shares the code pages of the same executable between two (unrelated) processes [and a forked process shares code by definition]. I believe Windows does too.
Shared libraries (".so" files on Linux / Unix and ".dll" files on Windows) are another way code is shared between processes - the same loaded copy of the library serves all the applications that use it.
The data space is, of course, separate for each process, and shared libraries likewise get a private data section in each process that uses them.