If mmap is faster than conventional file access, where do we see the time savings?

I understand the use of mmap. With a plain read/write operation on a file, you open the file and allocate a buffer, then read into it (which requires a system call and therefore a context switch); only then is the data available to the user in the buffer, and changes made to the buffer are not reflected in the file unless they are explicitly written back.
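As an illustration of that conventional path, here is a minimal C sketch (assuming a POSIX system; the file name "data.bin" and the buffer size are made up for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof buf);   /* syscall: kernel -> user copy */
    if (n <= 0) { perror("read"); return 1; }

    buf[0] ^= 0xFF;                          /* modifies only the private copy */

    /* The change is invisible to the file until it is written back. */
    if (lseek(fd, 0, SEEK_SET) < 0 ||        /* syscall: reposition */
        write(fd, buf, (size_t)n) != n) {    /* syscall: user -> kernel copy */
        perror("write back");
        return 1;
    }
    close(fd);
    return 0;
}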

With mmap, on the other hand, accessing the buffer directly is nothing more than accessing the file itself, since the mapped memory is attached to the file.
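For contrast, a hedged sketch of the same update done through mmap (same assumptions: POSIX system, illustrative file name, file at least one byte long). After the single mmap() call, the modification is an ordinary memory store with no further read/write/lseek system calls:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { fprintf(stderr, "file is empty\n"); return 1; }

    char *p = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] ^= 0xFF;                       /* plain memory store, no system call */

    munmap(p, (size_t)st.st_size);      /* changes reach the file on unmap/msync */
    close(fd);
    return 0;
}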

Question:

1) The file is located on the hard disk and is mmapped into the process. Every time I write to the mmapped memory, is it written directly to the file? If so, it requires no context switch, because the changes are made directly in the file itself. But then, if mmap is faster than conventional file access, where do we see the time savings?

Please explain, and correct me if I am wrong.

1 answer

File updates do not appear on disk immediately; they appear after unmapping or after msync. Consequently, no system call is made during the updates themselves, and the kernel is not involved. However, since the file is read lazily, page by page, the OS may still have to read in some content when you cross a page boundary. The most obvious benefit of memory mapping is that it eliminates the copying of data between kernel space and user space. There is also no need for system calls (such as lseek) to seek to a specific position in the file.
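To make the flushing point concrete, here is a minimal sketch (assumptions: a POSIX system; p and len come from an earlier MAP_SHARED mmap of the file, as in the question; flush_mapping is a hypothetical helper name, not a library function):

#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical helper: force dirty pages of a MAP_SHARED mapping to disk.
 * Until msync()/munmap(), stores through the mapping are only guaranteed
 * to be in memory (the page cache), not on disk. */
int flush_mapping(void *p, size_t len) {
    /* MS_SYNC blocks until write-back completes; MS_ASYNC only schedules it. */
    return msync(p, len, MS_SYNC);
}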


Source: https://habr.com/ru/post/1447868/

