What are the factors that limit virtual memory?

I know that the size of virtual memory is limited by the number of address lines. But in Operating Systems: Internals and Design Principles by William Stallings, I read that virtual memory is also limited by the size of secondary memory.
1. How?
2. Is swapping (between main memory and secondary memory) a prerequisite for virtual memory? I mean, if swapping is not allowed, can we still call it virtual memory, even though the benefits would be limited?
I also have a few follow-up questions based on the answer.

Edit:


I think I should quote the exact words of the book:

A storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically to the corresponding machine addresses. The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, and not by the actual number of main storage locations.

Is there some wordplay going on between "virtual memory" and "the size of the virtual storage"?

2 answers

The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, and not by the actual number of main storage locations.

The book apparently assumes (incorrectly) that you will not allocate virtual memory you do not plan to use. So it warns that physical memory plus the disk space used for swap limits the virtual memory you can actually use (from your process's point of view, and subject to the other demands on that resource pool from the OS and other processes).

In practice, it is often useful to allocate more virtual memory than you could ever back with RAM and swap, because you might want to, for example:

  • use virtual memory for a sparse array, where you directly index only a few scattered addresses,
  • simply fault when system resources are genuinely exhausted, rather than complicating your code by trying to track available memory (remember that it is a moving target shared with other processes, the OS, etc.) or imposing a pessimistic limit, which would mean you could not use your system's capacity aggressively,
  • let each program assume it was loaded at the address for which it was compiled, so it can use absolute addresses for jump instructions, etc., rather than relative ones.
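The sparse-array point above is easy to see from user space. Here is a minimal Python sketch (assuming a Linux or macOS system with demand paging) that reserves far more address space than it ever backs with RAM:

```python
import mmap

# A minimal sketch (Linux/macOS assumption): reserve 1 GiB of virtual
# address space with an anonymous mapping. Physical pages are only
# allocated when a page is first written (demand paging), so this does
# not consume 1 GiB of RAM up front.
SIZE = 1 << 30
buf = mmap.mmap(-1, SIZE)

# Touch a few scattered offsets; only those pages acquire backing.
for offset in (0, SIZE // 2, SIZE - 1):
    buf[offset] = 0xFF

val = buf[SIZE // 2]
print(val)  # 255
```

Watching the process's resident set size (e.g. in `top`) while this runs shows it stays tiny, even though a gigabyte of virtual addresses has been handed out.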

Referring to your specific questions:

1. [virtual memory is also limited by the size of secondary memory] How?

Again, it is limited in the sense that attempts to use more will fail once memory, both physical and swap, is exhausted.

2. Is swapping (between main memory and secondary memory) a prerequisite for virtual memory?

It's a bit fuzzy... virtual memory can only increase the total amount of memory processes can transparently use by swapping the contents of physical memory out to make room for new memory needs, then reloading the swapped-out contents from secondary memory on demand. But even if there is no disk space for swap (and therefore no swapping), or not enough to cover every allocation, processes can still benefit from virtual addressing as discussed above: sparse arrays, huge stack/heap regions with room for on-demand growth, etc.
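The swap mechanism described above can be sketched as a toy simulation. This is my own simplification (the `ToyVM` class, the FIFO eviction policy, and the frame count are illustrative, not from the book): "RAM" is a dict with a fixed frame limit, and another dict stands in for swap space on secondary storage.

```python
class ToyVM:
    """Toy model of demand paging with FIFO eviction (illustrative only)."""

    def __init__(self, frames):
        self.frames = frames   # number of physical frames available
        self.resident = {}     # page -> data held in "RAM"
        self.swap = {}         # page -> data held on "disk"
        self.fifo = []         # eviction order for resident pages

    def access(self, page):
        if page in self.resident:
            return self.resident[page]        # hit: no fault
        # Page fault: if RAM is full, evict the oldest page to swap.
        if len(self.resident) >= self.frames:
            victim = self.fifo.pop(0)
            self.swap[victim] = self.resident.pop(victim)
        # Reload from swap, or zero-fill a brand-new page.
        self.resident[page] = self.swap.pop(page, 0)
        self.fifo.append(page)
        return self.resident[page]

vm = ToyVM(frames=2)
vm.access(1); vm.access(2); vm.access(3)      # page 1 is evicted to swap
print(sorted(vm.resident), sorted(vm.swap))   # [2, 3] [1]
```

With no swap dict at all, the same eviction step would have to fail instead, which is exactly why a swap-less system cannot transparently exceed physical RAM.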

I mean, if swapping is not allowed, can we still call it virtual memory, even though the benefits would be limited?

Maybe. You can still use virtual addressing, but whether that counts as virtual memory depends on the terminology you adopt: there is a reasonable argument that "virtual memory" implies pretending to have more memory than is physically present, so without swap you don't have that, even though you are using the virtual addressing machinery that underpins virtual memory.


Regarding the excerpt from the book, I can see the source of your confusion. I had to read it a couple of times to see what it was saying. A clearer explanation might be: virtual memory is an abstraction that lets a program allocate memory without worrying about the physical limitations of the system it runs on. Programs use virtual memory naively; the abstraction layer distinguishes between virtual memory locations that map directly to physical locations and those that live in secondary storage. Or an address may map to nothing at all, and you have a segfault on your hands.

Number 2 is definitely not true. Virtual memory exists, and is "available" for use by programs, whether or not it has physical backing. As for the claim that it is limited "... by the amount of secondary memory available", I don't quite follow that part. One could design a virtual memory layer with 100 gigabytes of address space, and that would be fine.

If I traded correctness for clarity, then I apologize. My explanation was not very academic, and it looks like you are in school, but there you go. Regardless, hope this helps.

-tjw


Source: https://habr.com/ru/post/1347670/

