I have two applications (processes) running under Windows XP that exchange data through a memory-mapped file. Despite all my efforts to eliminate them, I still get about 10 soft page faults per page when transmitting data. I've tried every flag in CreateFileMapping() and MapViewOfFile(), and it still happens. I'm starting to wonder whether memory-mapped files are the right approach at all.
If someone knows the details of the OS implementation of memory-mapped files, I would be grateful for comments on the following theory: if two processes share a memory-mapped file, and one process writes to it while the other reads it, then the OS marks the written pages as invalid. When the other process goes to read the memory regions that now refer to invalid pages, this triggers a soft page fault (by design), and the OS knows to reload the invalidated page. The number of soft page faults would therefore be directly proportional to the amount of data written.
My experiments seem to confirm this theory. When I exchange data, I write one contiguous block; in other words, the entire shared memory area is overwritten on each transmission. If I make the block larger, the number of soft page faults increases accordingly. So, if my theory is correct, there is nothing I can do to eliminate the soft page faults short of abandoning memory-mapped files, because that is simply how they work (they rely on soft page faults to keep pages coherent). The irony is that I chose a memory-mapped file over a TCP socket connection precisely because I thought it would be more efficient.
Note that I've read that soft page faults are supposed to be harmless, but I've also heard that beyond some point an excessive number of them can degrade system performance. If soft page faults are essentially harmless, then I'd welcome any recommendations on what number per second counts as "excessive".
Thanks.