Do new/malloc and delete/free occupy or invalidate cache lines?

I am curious about the behavior of the cache. The following are some cache related questions:

  • Does a write operation put data in the cache? Given the statement A[i] = B[i], will A[i] be loaded into the cache? After all, I am only writing to A[i], not reading its value.

  • When allocating a large block of memory, the memory may come from the OS, and the OS zero-initializes it for security reasons ( Link ). If writes put data in the cache (question 1), will this zeroing occupy the cache?

  • Suppose there is an allocated array B, and all of B is currently in the cache. Will the cache lines occupied by B become invalid (available) immediately after I free B?

Can someone tell me?

3 answers

From here https://people.freebsd.org/~lstewart/articles/cpumemory.pdf

-

  • Does the write operation cache data?

From the article:

By default, all data read or written by the CPU cores is stored in the cache. There are memory regions which cannot be cached, but this is something only the OS implementers have to worry about; it is not visible to the application programmer. There are also instructions which allow the programmer to deliberately bypass certain caches. This will be discussed in section 6.

-

  2. When allocating a large block of memory, the memory may come from the OS. Will this mechanism occupy the cache?

Probably not. It will occupy the cache only once the data is actually read or written. From the article:

On operating systems, such as Linux, with demand paging, the mmap call only modifies the page tables ... No actual memory is allocated at the time of the mmap call.

The allocation part itself happens when a memory page is first accessed, be it by reading or writing data, or by executing code. In response to the ensuing page fault, the kernel takes control and determines, using the page table tree, the data which has to be present on the page. This page fault handling is not cheap, but it happens for each and every page which is used by a process.

-

3. Suppose there is an allocated array B, and all of B is now in the cache. Will the cache lines occupied by B become invalid (available) immediately after I free B?

From the article: cache line invalidation occurs only when another CPU writes to the line:

What developed over the years is the MESI cache coherency protocol (Modified, Exclusive, Shared, Invalid). The protocol is named after the four states a cache line can be in when using the MESI protocol .... If the second processor wants to write to the cache line, the first processor sends the cache line content and marks the cache line locally as Invalid.

And a cache line can also be evicted:

Another detail of caches which is rather uninteresting to programmers is the cache replacement strategy. Most caches evict the Least Recently Used (LRU) element first.

And in my experience with TCMalloc, free() is not a compelling reason to evict memory from the cache. On the contrary, eviction could hurt performance. On free(), TCMalloc simply puts the freed memory block into its own cache of free blocks, and that block will be returned by malloc() the next time the application requests a block of that size. This is the essence of a caching allocator such as TCMalloc. And if the block is still in the CPU cache, so much the better for performance!


This is an interesting article in which you will find more information (perhaps more than you need) about what you are asking:

What Every Programmer Should Know About Memory

Regarding your question: every memory operation you perform goes through the cache. As an application programmer you have essentially no direct control over this (and for the most part neither does the OS). What you can control is locality: if you implement a memory-intensive algorithm, try to increase the locality of your memory accesses.

So, if you have to process, say, 1 GB of data, try to split the work into clusters (data partitions) and perform all operations on one partition at a time. That way you keep reusing data that is already in the cache instead of going out to main memory every time, which can give you a real performance boost.


To answer the title question: no, these operations do not invalidate cache lines, and deliberately so. Consider what happens when memory is freed. There are two important cases. First, the memory may be recycled for your program's next allocation; in that case it is beneficial that the address is still in the cache, since this reduces accesses to main memory.

But even if the memory is not reused, the memory allocator may coalesce free blocks behind the scenes or perform other bookkeeping. This often involves writing into the space previously occupied by your data. After all, freeing memory does not physically destroy it; it merely changes its owner. After delete, ownership passes from your program to the runtime or to the OS.


Source: https://habr.com/ru/post/989667/

