Linux page table management and the MMU

I have a question about the relationship between the Linux kernel and the MMU. I understand that the Linux kernel manages page tables that map virtual memory addresses to physical memory addresses. At the same time, the x86 MMU also handles the mapping between virtual and physical addresses. If an MMU is present alongside the processor, does the kernel still have to take care of the page tables?

This question may be silly, but here is another one: if the MMU takes care of the address space, who manages high memory and low memory? I believe the kernel learns the size of the virtual address space from the MMU (4 GB on 32-bit) and then distinguishes user space from kernel space within that virtual address range. Am I right, or completely wrong?

Thank you very much!

1 answer

The OS's responsibility for page table management and the MMU are two sides of the same mechanism, one that lives on the border between architecture and microarchitecture.

The first side defines the "contract" between the hardware and the software running on it (in this case, the OS): if you want to use virtual memory, you need to build and maintain page tables exactly as that contract describes. The MMU side, on the other hand, is a hardware unit responsible for performing the address translation work in hardware. This may or may not include hardware optimizations; these are usually hidden and may be implemented in various ways under the hood, as long as the hardware side of the contract is upheld.
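To make the contract concrete, here is a minimal, hedged sketch (not real kernel code) of the software side on 32-bit x86 without PAE: the OS fills in page-directory and page-table entries in the exact bit layout the architecture defines, then points the MMU at the root table through CR3. The map_page() and load_cr3() helpers are illustrative names, not an existing API.

```c
/*
 * Sketch of the software side of the "contract" on 32-bit x86
 * (non-PAE, 4 KiB pages): build entries in the architected format,
 * then hand the root of the structure to the MMU via CR3.
 */
#include <stdint.h>

#define PTE_PRESENT  0x001u   /* translation is valid               */
#define PTE_WRITE    0x002u   /* writable                           */
#define PTE_USER     0x004u   /* accessible from user mode (CPL 3)  */

typedef uint32_t pte_t;

/* 1024 PDEs, each covering 4 MiB; each PDE points to 1024 PTEs. */
static pte_t page_directory[1024]  __attribute__((aligned(4096)));
static pte_t first_page_table[1024] __attribute__((aligned(4096)));

/* Map one 4 KiB page: virt -> phys, kernel-only, read/write.
 * For simplicity this assumes the page table is identity-mapped,
 * so its virtual address doubles as its physical address. */
static void map_page(uint32_t virt, uint32_t phys)
{
    uint32_t pd_index = virt >> 22;            /* top 10 bits  */
    uint32_t pt_index = (virt >> 12) & 0x3FFu; /* next 10 bits */

    page_directory[pd_index] =
        (uint32_t)(uintptr_t)first_page_table | PTE_PRESENT | PTE_WRITE;
    first_page_table[pt_index] =
        (phys & ~0xFFFu) | PTE_PRESENT | PTE_WRITE;
}

/* Point the MMU at the page directory (its physical address) and step
 * back: from here on, the hardware performs the translations. */
static inline void load_cr3(uint32_t pgdir_phys)
{
    __asm__ volatile("mov %0, %%cr3" : : "r"(pgdir_phys) : "memory");
}
```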

In theory, the MMU could issue a set of memory accesses for every translation (a page walk) in order to achieve the required behavior. However, since translation is performance-critical, most MMUs optimize it by caching the results of previous page walks in the TLB, much the same way a cache keeps the results of previous accesses (in fact, on some implementations the caches themselves may also hold some of the page table accesses, since the page tables usually reside in cacheable memory). An MMU may manage multiple TLBs (most implementations keep separate ones for data and code pages, and some have a second-level TLB) and serve translations from there without you noticing, except for the faster access time.
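For a feel of what a single page walk costs (and therefore what a TLB hit saves), here is a rough sketch of the two dependent memory reads the hardware performs per translation on 32-bit x86 without PAE; phys_to_virt() is just a placeholder for however the walker reaches physical memory.

```c
/*
 * Hedged sketch of one translation (a page walk) on 32-bit x86 without
 * PAE: two dependent memory reads before the actual data access.
 * The (virtual page -> physical frame) result is what a TLB entry caches.
 */
#include <stdint.h>

#define PTE_PRESENT 0x001u

/* Placeholder: some way to read a physical address from software. */
extern void *phys_to_virt(uint32_t phys);

/* Returns the physical address for 'virt', or 0 to signal a page fault. */
uint32_t walk_one(uint32_t cr3, uint32_t virt)
{
    uint32_t *pgdir = phys_to_virt(cr3 & ~0xFFFu);
    uint32_t pde = pgdir[virt >> 22];               /* memory access #1 */
    if (!(pde & PTE_PRESENT))
        return 0;                                   /* -> page fault    */

    uint32_t *pgtab = phys_to_virt(pde & ~0xFFFu);
    uint32_t pte = pgtab[(virt >> 12) & 0x3FFu];    /* memory access #2 */
    if (!(pte & PTE_PRESENT))
        return 0;                                   /* -> page fault    */

    /* Frame base plus the page offset from the original address. */
    return (pte & ~0xFFFu) | (virt & 0xFFFu);
}
```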

It should also be noted that the hardware must protect against the many corner cases that could harm the coherency of this TLB "caching" of previous translations, such as page aliasing or remapping pages while they are in use. On some machines, the nastier cases even require a heavyweight flush flow called a TLB shootdown.
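As a hedged illustration of why this matters to software, the sketch below shows the usual order of operations when remapping a page on x86: rewrite the PTE, invalidate the stale local TLB entry with invlpg, and, on an SMP machine, tell the other CPUs to do the same (the shootdown). remap_pte() and notify_other_cpus_to_flush() are placeholders, not real kernel functions.

```c
/*
 * Sketch (not real kernel code) of TLB maintenance on a remap: after the
 * PTE changes, any cached translation for the old mapping is stale and
 * must be dropped, locally via 'invlpg' and, on SMP, on every other CPU
 * that might hold it (the "TLB shootdown", typically driven by IPIs).
 */
#include <stdint.h>

extern void remap_pte(uintptr_t virt, uintptr_t new_phys); /* rewrites the PTE   */
extern void notify_other_cpus_to_flush(uintptr_t virt);    /* placeholder for IPI */

static inline void invlpg(uintptr_t virt)
{
    /* Drop any TLB entry for this virtual address on the local CPU. */
    __asm__ volatile("invlpg (%0)" : : "r"(virt) : "memory");
}

void remap_page(uintptr_t virt, uintptr_t new_phys)
{
    remap_pte(virt, new_phys);           /* 1. change the page table entry   */
    invlpg(virt);                        /* 2. local TLB entry is now stale  */
    notify_other_cpus_to_flush(virt);    /* 3. the shootdown, on SMP systems */
}
```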

