How does a Linux server manage physical memory when less than 1 GB is installed?

I am studying Linux kernel internals and, while reading Understanding the Linux Kernel, I ran into a number of memory-related questions. One of them is how the Linux kernel handles memory mapping when the physical memory installed in my system is, say, only 512 MB.

As I read it, the kernel maps physical RAM from 0 (or 16) MB up to 896 MB at the linear address 0xC0000000 and can access it directly. So, in the case described above, where I have only 512 MB:

  • How can the kernel map 896 MB when there is only 512 MB in total? In the scheme described, the kernel sets things up so that every process's page tables map virtual addresses 0xC0000000 to 0xFFFFFFFF (1 GB) directly to physical addresses 0x00000000 to 0x3FFFFFFF (1 GB). But when I have only 512 MB of physical memory, how can virtual addresses 0xC0000000-0xFFFFFFFF be mapped to physical 0x00000000-0x3FFFFFFF? The point is: I have a physical range of only 0x00000000-0x20000000.

  • What about user-mode processes in this situation?

  • Every article only explains the situation where 4 GB of memory is installed, the kernel maps 1 GB into kernel space, and user processes use the remaining RAM.

I would appreciate any help in improving my understanding.

Thanks!

+42
arm linux-kernel kernel embedded-linux linux-device-driver
Dec 24 '10 at 22:28
5 answers

Not all virtual (linear) addresses have to be mapped to anything. If code accesses an unmapped page, a page fault is raised.

A physical page can be mapped to multiple virtual addresses simultaneously.

The 4 GB of virtual memory is split into two parts: 0x0 ... 0xBFFFFFFF is process virtual memory and 0xC0000000 ... 0xFFFFFFFF is kernel virtual memory.

  • How can the kernel map 896 MB with only 512 MB?

It maps up to 896 MB. So if you have only 512 MB, only 512 MB will be mapped.

If your physical memory occupies the range 0x00000000 to 0x20000000, it will be mapped for direct kernel access at virtual addresses 0xC0000000 to 0xE0000000 (a linear mapping).

  • What about user-mode processes in this situation?

Physical memory for user processes will be mapped (not contiguously, but more or less randomly, page by page) to virtual addresses 0x0 ... 0xBFFFFFFF. These mappings are second mappings of pages from the 0..896 MB range. The pages are taken from the free page lists.

  • Where do user-mode processes live in physical RAM?

Anywhere.

  • Each article only explains the situation where you have 4 GB of memory installed and ...

No. Every article explains how the 4 GB of virtual address space is mapped. The virtual address space is always 4 GB (on a 32-bit machine without memory extensions such as PAE/PSE/etc. on x86).

As described in section 8.1.3, Memory Zones, of Robert Love's Linux Kernel Development book (I use the third edition), there are several physical memory zones:

  • ZONE_DMA - contains page frames of memory below 16 MB.
  • ZONE_NORMAL - contains page frames of memory from 16 MB up to 896 MB.
  • ZONE_HIGHMEM - contains page frames of memory above 896 MB.

So if you have 512 MB, your ZONE_HIGHMEM will be empty and ZONE_NORMAL will hold 496 MB of physical memory.

Also note section 2.5.5.2, Final kernel Page Table when RAM size is less than 896 MB, of the Understanding the Linux Kernel book. That is exactly the case where you have less than 896 MB of memory.

In addition, there is a description of the kernel virtual memory layout for ARM: http://www.mjmwired.net/kernel/Documentation/arm/memory.txt

Line 63 there says that the range PAGE_OFFSET .. high_memory-1 is the direct-mapped portion of memory.

+39
Dec 24 '10 at 22:40

The hardware provides a memory management unit (MMU). It is a piece of circuitry able to intercept and rewrite every memory access. Whenever the processor accesses RAM, for example to read the next instruction to execute, or as a data access triggered by an instruction, it does so at some address, which is, roughly speaking, a 32-bit value. A 32-bit word can take a bit over 4 billion distinct values, so there is a 4 GB address space: that many bytes can have a unique address.

Thus the processor sends its memory subsystem a request of the form "fetch the byte at address x and give it back to me". The request goes through the MMU, which decides what to do with it. The MMU effectively splits the 4 GB space into pages; the page size depends on the hardware you use, but typical sizes are 4 kB and 8 kB. The MMU uses tables that say what to do with accesses to each page: either the access is granted with a rewritten address (the page table entry says: "yes, the page containing address x exists; it is in physical memory at address y") or it is rejected, at which point the kernel is invoked to sort things out. The kernel may decide to kill the offending process, or to do some work and amend the MMU tables so that the access can be tried again, this time successfully.

This is the basis of virtual memory: from its own point of view, the process has some RAM, but the kernel has moved it to the hard disk, into "swap space", and marked the corresponding entry as "absent" in the MMU tables. When the process accesses its data, the MMU invokes the kernel, which fetches the data from swap, puts it back into some free slot of physical RAM, and alters the MMU tables to point at that slot. The kernel then resumes the process, right at the instruction that triggered it all. The process code sees nothing of the whole business, except that the memory access took quite a long time.

The MMU also handles access rights, which prevent a process from reading or writing data belonging to other processes or to the kernel. Each process has its own set of MMU tables, and the kernel manages those tables. That way each process has its own address space, as if it were alone on a machine with 4 GB of RAM, except that the process had better not access memory it has not rightfully obtained from the kernel, since the corresponding pages are marked absent or forbidden.

When the kernel is invoked through a system call from a process, the kernel code must run within that process's address space; therefore kernel code must reside somewhere in the address space of every process (but protected: the MMU tables prevent access to kernel memory from unprivileged user code). Since code may contain hardcoded addresses, the kernel had better sit at the same address in all processes; by convention, on Linux that address is 0xC0000000. The MMU tables of each process map that part of the address space onto whichever physical RAM blocks the kernel was actually loaded into at boot time. Note that kernel memory is never swapped out (if the very code that can read data back from swap space were itself swapped out, things would turn sour quickly).

On a PC, things can be a little more complicated, because there are 32-bit and 64-bit modes, segment registers, and PAE (which acts as a kind of second-level MMU with huge pages). The basic concept remains the same: each process gets its own view of a virtual 4 GB address space, and the kernel uses the MMU to map each virtual page to an appropriate physical position in RAM, or nowhere at all.

+14

osgx has a great answer, but I see from a comment that someone still doesn't understand.

Each article only explains the situation where 4 GB of memory is installed, the kernel maps 1 GB into kernel space, and user processes use the remaining RAM.

Here is the big confusion. There is virtual memory and there is physical memory. Every 32-bit processor has 4 GB of virtual address space. The traditional Linux kernel split was 3G/1G between user memory and kernel memory, but newer options allow other splits.

Why distinguish between kernel and user space? (a question of my own)

When a task context-switches, the MMU tables need to be updated. The kernel's part of the MMU space must remain identical across all processes, because the kernel must be able to handle interrupts and fault requests at any time.

How does virtual mapping work? (a question of my own)

There are many permutations of virtual memory mappings:

  • a single private mapping of a physical RAM page.
  • duplicate virtual mappings of a single physical page.
  • a mapping that generates a SIGBUS or other fault.
  • a mapping backed by disk/swap.

From the list above, it is easy to see why you can have more virtual address space than physical memory. In fact, the fault handler typically inspects the process memory information to see whether the page is mapped (I mean allocated to the process) but just not in memory. In that case the fault handler calls into the I/O subsystem to read the page in. Once the page has been read and the MMU tables updated to point the virtual address at the new physical address, the process that caused the fault resumes.

If you understand the foregoing, it becomes clear why you would want a virtual mapping larger than physical memory. That is how memory swapping is supported.

There are other uses as well. For example, two processes can use the same code library. Because of linking, it may well sit at different virtual addresses in the two process address spaces. In that case you can map the different virtual addresses to the same physical pages to save physical memory. This is also quite common for fresh allocations; they all point at the physical "zero page". When you touch/write the memory, the zero page is copied and a new physical page assigned (COW, or copy-on-write).

It can also be useful to have two virtual pages aliasing the same physical page, one cached and one uncached. You can compare the two views to see which data is cached and which is not.

Basically, virtual and physical are not the same! Easily stated, but often confusing when looking at the Linux VMM code.

+3
Sep 10 '14 at 19:46


Hi. Actually I do not work on the x86 hardware platform, so there may be some technical errors in my post.

As far as I know, the 0 (or 16) MB - 896 MB range is only special when you have more RAM than that number. Say you have 1 GB of physical memory on your board; then that range is called "low memory". If you have more than 896 MB of physical memory on your board, the rest of the physical memory is called highmem.

Speaking of your question: your board has 512 MB of physical memory, so in fact the 896 MB boundary never comes into play, and there is no highmem.

The kernel can see all 512 MB of RAM and can map all of it as well.

Because there is a 1:1 mapping between physical memory and the kernel's virtual addresses, there is a 512 MB directly mapped virtual address range for the kernel. I'm really not sure whether the previous sentence is correct, but that is how I understand it.

What I mean is: if there is 512 MB, then the amount of physical RAM the kernel can manage is also 512 MB, so the kernel has no need for a direct-mapped range any larger than 512 MB.

Regarding user space, there is one other point: the pages of a user application can be swapped out to the hard drive, but kernel pages cannot.

So for user space, through page tables and the other related machinery, it looks as if there is still a 4 GB address space. Of course, this is virtual address space, not physical RAM.

This is what I understand.

Thanks.

+2

If the physical memory is less than 896 MB, the Linux kernel maps that physical memory linearly.

See here for more details: http://learnlinuxconcepts.blogspot.in/2014/02/linux-addressing.html

+1
Mar 09 '14 at 12:54


