Why can I trust memory allocation?

I am currently taking an introductory CS course that uses C. My tutorial assumes that a pointer variable still contains the address of previously allocated memory even after free() has been called on it (assuming I had called malloc() earlier). Does this mean that parts of memory become "locked" when malloc() is called, so that my pointer's data stays intact? What prevents other processes - say, Google Chrome or some other application - from messing with my variables? I could easily assign a value to an array at an invalid index and break things. I could also read memory incorrectly, again through an invalid array index, and get garbage - or, if I were really lucky, a value that happens to matter to me. What keeps the computer from descending into chaos?!

+5
5 answers

What keeps the computer from descending into chaos?!

Modern processors have a mode of operation called protected mode, with virtual memory: an ordinary program (process) runs in so-called user mode and sees a memory space that is separate from that of every other currently running process. The operating system then ensures that any such misbehavior stays contained within that one process - the errant access will most likely make the process crash, but the damage is confined to it.
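As a minimal sketch of that containment (assuming a typical desktop OS such as Linux, macOS, or Windows; 0xDEADBEEF here is just an arbitrary address the process almost certainly does not own), a wild pointer write like the one below is trapped by the MMU, and the OS terminates only this one process while everything else on the machine keeps running:

#include <stdio.h>

int main(void)
{
    int *wild = (int *)0xDEADBEEF;   // an address this process almost certainly does not own

    printf("About to misbehave...\n");
    *wild = 42;                      // the MMU traps this; the OS kills only this process

    printf("on a typical protected-mode OS this line is never reached\n");
    return 0;
}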

This was not always the case: while Windows 3.x could use the x86 protected mode, it ran all programs at the same privilege level, so the infamous Blue Screen of Death could take the whole system down:

[Image: Blue Screen of Death]


As for the part saying that:

My tutorial assumes that the pointer variable still contains the address of the previously allocated memory, even after free() was called on it (assuming I used malloc() earlier).

This is actually not the case. The C standard states that after calling free on a pointer, the value of the pointer itself becomes indeterminate. It may still point to the same address in your implementation, but all bets are off. The following program may even crash on some platforms:

void *ptr = malloc(42);
free(ptr);
// ... some other code in between ...
if (ptr) {      // undefined behavior: the value of ptr is indeterminate here
    // ...
}

As the C standard says in 6.2.4p2:

The value of a pointer becomes indeterminate when the object it points to (or just past) reaches the end of its lifetime.

And Annex J.2, Undefined behavior:

The value of a pointer to an object whose lifetime has ended is used (6.2.4).

One possible behavior comes from the compiler knowing that the value is not needed after free. If ptr in the code above was kept in a register, the compiler is free to reuse that register for the code in between, and by the time ptr is read in the if, the variable could behave just like an uninitialized value.

Using a pointer whose value is indeterminate results in undefined behavior. The compiler does not have to produce the value you expect; it only has to conform to the standard.


In any case, the book is right that you should not use ptr after free(ptr) until you assign a new value to it; it is just that the specific example in the book is misleading, because what it shows is only one of many possible outcomes, as is usually the case with undefined behavior in C.
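A common defensive habit (just a convention, not something the standard requires; a minimal sketch) is to assign NULL to the pointer right after freeing it, so that any later test has a well-defined result:

#include <stdlib.h>

int main(void)
{
    void *ptr = malloc(42);
    if (ptr == NULL)
        return 1;       // allocation failed

    free(ptr);
    ptr = NULL;         // give ptr a well-defined value again

    if (ptr) {          // now this test is legal and simply false
        // never reached
    }
    return 0;
}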

+4

the pointer variable still contains the address for previously allocated memory, even after free() was called on it

That's right. Such a pointer is called a dangling pointer. Your program must not use it; otherwise its behavior is undefined.

Does this mean that parts of memory become "locked" when malloc() is called, so that my pointer's data stays intact?

They are "locked" only in the sense that malloc will not hand the allocated range back to your program again until you free it. However, there is no built-in protection: if your program accidentally writes through a freed pointer, it can overwrite data belonging to a legitimate variable, leading to bugs that are extremely hard to track down without the right tools (such as Valgrind or AddressSanitizer).
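As a minimal sketch of how that corruption happens (the exact outcome depends on the allocator; this is undefined behavior, so never rely on it), a write through a stale pointer can scribble over a block that malloc has since handed out for something else:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *old = malloc(16);
    if (old == NULL)
        return 1;
    strcpy(old, "first");
    free(old);                    // old is now a dangling pointer

    char *fresh = malloc(16);     // the allocator may reuse the same block
    if (fresh == NULL)
        return 1;
    strcpy(fresh, "important");

    strcpy(old, "oops");          // undefined behavior: may corrupt fresh
    printf("%s\n", fresh);        // might print "oops", "important", or crash

    free(fresh);
    return 0;
}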

What prevents other processes - say, Google Chrome or some other application - from messing with my variables?

The fact that other applications run in a separate memory space. The hardware and the OS ensure that other processes are locked out of your program's memory.

+6

Back in the 1970s there were computers used by more than one person at a time. The other users hated it when one of them crashed the whole system. So the virtual machine was invented. Not quite what "virtual machine" means when you hear it today - or, well, yes, sort of.

Virtual memory, so that every program can use the address 0x4000000 at the same time. Virtual processors, so that many programs can run on the same CPU by sharing its time. Virtual input and output devices.

All of this was invented some 50 years ago in Multics, VMS, Unix, IBM OS/360, and so on.

+4

What prevents other processes - say, Google Chrome or some other application - from messing with my variables?

All modern desktop/server operating systems guarantee that one program's memory cannot be accessed by any other program. This comes mainly from the OS-controlled configuration of the memory management unit (MMU) in the CPU.

However, not all operating systems do this. DOS does not, mainly because it was designed as a single-process OS; TSR programs can access anything on the entire machine, including the foreground program.

Some real-time operating systems, such as VxWorks (especially older versions), are preemptive, multitasking operating systems but do not provide separation between tasks. The designers chose this to reduce context-switching time, which is valuable in a real-time OS.

+1

Here is one way to think about it. Computer memory is divided into "pages". Imagine that each "page" is an actual piece of paper on which we write numbers in pencil. Imagine that all the pages our program uses are stored in a filing cabinet, from page 0 to page N. Suppose we have an easy way of keeping track of which pages in the filing cabinet are in use - maybe we fold down the top corner, or something like that. Finally, imagine that paper is somewhat valuable: we never throw it away. When we need new memory, we find an existing page that is not in use, erase everything on it, and use it again.

With this analogy in mind, we can answer your questions.

My tutorial assumes that the pointer variable still contains the address for the previously allocated memory.

Right. When we "free" a page of memory, we simply unfold the top corner (or whatever our marker is). But we do not erase the page yet, because there is no need to (that would be inefficient). We wait until someone else allocates the page later, and only then erase it and write new numbers on it.

What prevents other processes - say, Google Chrome or some other application - from messing with my variables?

In the analogy as developed so far, nothing. If every program on your computer really did access memory directly, there would be nothing to stop them from interfering with each other, and badly. That is why most computers do not actually let ordinary programs access memory directly.

Instead, almost all general-purpose computers today include a memory management unit (typically used to implement virtual memory). As a result, each program effectively gets its own filing cabinet. A program can mess with every page in its own cabinet - it can use them correctly or incorrectly and confuse itself if it really wants to - but it simply cannot do anything with the pages in any other application's cabinet. It cannot even look at them, let alone write to them.
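One way to see the "separate cabinet" effect for yourself is the sketch below (a POSIX-only illustration using fork(); the address printed will vary from run to run): the parent and child processes hold a variable at the very same virtual address, yet changing it in one process does not affect the other:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int value = 1;

    pid_t pid = fork();               // the child gets its own copy of the address space
    if (pid == 0) {
        value = 2;                    // changes only the child's page
        printf("child:  &value = %p, value = %d\n", (void *)&value, value);
        return 0;
    }

    wait(NULL);                       // let the child finish first
    printf("parent: &value = %p, value = %d\n", (void *)&value, value);   // still 1
    return 0;
}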

+1

Source: https://habr.com/ru/post/1265667/

