How do I measure the size of an interprocess vector in shared memory?

I use boost::interprocess::vector to share some strings between processes, and I want to make sure that I do not overflow the shared memory segment in which it lives.

How do you find out how much space the vector occupies in memory, and how much memory a string allocated from the segment consumes?

typedef boost::interprocess::managed_shared_memory::segment_manager SegmentManager;
typedef boost::interprocess::allocator<char, SegmentManager> CharAllocator;
typedef boost::interprocess::basic_string<char, std::char_traits<char>, CharAllocator> ShmString;
typedef boost::interprocess::allocator<ShmString, SegmentManager> StringAllocator;
typedef boost::interprocess::vector<ShmString, StringAllocator> ShmStringVector;

const size_t SEGMENT_SIZE = ...;

void addToSharedVector(std::string localString) {
    using namespace boost::interprocess;
    managed_shared_memory segment(open_only, kSharedMemorySegmentName);
    ShmStringVector *shmvector =
        segment.find<ShmStringVector>(kSharedMemoryVectorName).first;

    size_t currentVectorSizeInShm = ?????(shmvector);          // <-------- HALP!
    size_t sizeOfNewStringInSharedMemory = ?????(localString); // <--------

    // shared mutex not shown for clarity
    if (currentVectorSizeInShm + sizeOfNewStringInSharedMemory < SEGMENT_SIZE) {
        CharAllocator charAllocator(segment.get_segment_manager());
        ShmString shmString(charAllocator);
        shmString = localString.c_str();
        shmvector->push_back(shmString);
    }
}
1 answer
  • Quick and dirty

    You can back the shared memory with a memory-mapped file and see how many pages were actually written to disk. On many implementations this gives an approximate figure, since pages are typically committed one at a time and the usual memory page size is 4 KiB.

    I have another answer [1] that shows the basics of this method.
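
    A minimal sketch of that approach (the file path and segment size here are illustrative assumptions, not from the original question): allocate from a managed_mapped_file, then use POSIX stat to see how many disk blocks have actually been committed.

    ```cpp
    #include <boost/interprocess/managed_mapped_file.hpp>
    #include <sys/stat.h>
    #include <cassert>
    #include <iostream>

    int main() {
        namespace bip = boost::interprocess;
        const char *path = "/tmp/demo_mapped_file"; // hypothetical path
        bip::file_mapping::remove(path);
        {
            // Reserve a 1 MiB mapped file; pages are only committed as touched.
            bip::managed_mapped_file file(bip::create_only, path, 1 << 20);
            file.allocate(64 * 1024); // touch some pages
            file.flush();             // push dirty pages to the backing file
        } // destructor unmaps the file

        struct stat st;
        assert(stat(path, &st) == 0);
        // st_blocks * 512 approximates the bytes actually committed to disk;
        // the exact value depends on the filesystem (sparse-file support etc.).
        std::cout << "blocks on disk: " << st.st_blocks << "\n";
        bip::file_mapping::remove(path);
        return 0;
    }
    ```

    Note this is only an approximation: filesystems that do not support sparse files will report the full file size regardless of how many pages were touched.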

  • You can use get_free_memory() on the segment manager. Note that this does not tell you how much is allocated /just/ for this vector, but it gives you a (possibly more useful) picture of how much space is actually taken.

In another answer [2], I used this to compare the memory overhead of contiguous-storage containers versus node-based containers.

[image: chart comparing free shared memory for contiguous and node-based containers]

As you can see, individual allocations carry a large overhead, and reallocation leads to fragmentation very quickly. It is therefore worth considering:

  • reserving capacity in advance to prevent reallocation.
  • using the specialized Boost Interprocess allocators for more efficient use of the shared memory segment.

[1] see Memory-mapped files, managed mapped files and offset pointers

[2] see Bad alloc thrown


Source: https://habr.com/ru/post/1204446/

