I forgot I even wrote this question.
Some explanations first:
- Non-PIC code can be loaded by the OS at any position in memory on [most] modern OSs. After everything is loaded, it goes through a fix-up phase that patches the text segment (where the executable code lives) so that it addresses global variables correctly; to pull this off, the text segment must be writable.
- PIC executable code can be loaded once by the OS and shared among multiple users/processes. For the OS to do this, though, the text segment must be read-only, which means no fix-ups. The code is instead compiled to use a global offset table (GOT), so it addresses globals relative to the GOT, at the cost of a small overhead per reference.
- If a shared library is built without PIC, PIC is strongly recommended but does not appear to be strictly necessary; if the OS must fix up the text segment, it is forced to load the library into memory marked read-write, which prevents sharing across processes/users.
- If an executable binary is built with PIC, I don't know what goes wrong under the hood, but I have witnessed several tools become unstable (mysterious crashes and the like).
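The GOT-versus-fix-up distinction above can be made visible at the object level. Below is a minimal sketch, assuming gcc and readelf on an x86-64 Linux box; the file and symbol names (`got_demo.c`, `global_var`) are made up for illustration:

```shell
# Write a tiny file that reads an external global variable.
cat > got_demo.c <<'EOF'
extern int global_var;
int get_global(void) { return global_var; }
EOF

# Compile the same source without and with PIC.
gcc -fno-pic -c got_demo.c -o nopic.o
gcc -fPIC    -c got_demo.c -o pic.o

# Non-PIC: a direct relocation against the variable itself -- the loader
# would have to patch the code, so the text cannot stay read-only/shared.
readelf -r nopic.o

# PIC: the reference goes through the GOT (a *GOTPCREL*-style relocation);
# the code never needs patching, only the writable GOT entry does.
readelf -r pic.o | grep -i got
```

The second `readelf` should show a GOT-based relocation for the PIC object, while the non-PIC object's relocation names the variable directly.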
Answers:
- Mixing PIC/non-PIC, or using PIC in executables, can cause instabilities that are hard to predict and track down. I don't have a technical explanation for why.
- ... these include segfaults, bus errors, stack corruption, and probably more.
- Non-PIC in shared objects is unlikely to cause serious problems, though it can result in more RAM being used if the library is loaded by many processes and/or users.
update (4/17)
Since then I have discovered the cause of some of the crashes I saw earlier. To illustrate:

```cpp
/* header.h */
#include <map>
typedef std::map<std::string,std::string> StringMap;
StringMap asdf;

/* file1.cc */
#include "header.h"

/* file2.cc */
#include "header.h"
int main( int argc, char** argv ) {
  for( int ii = 0; ii < argc; ++ii ) {
    asdf[argv[ii]] = argv[ii];
  }
  return 0;
}
```
... then:
```shell
$ g++ file1.cc -shared -fPIC -o libblah1.so
$ g++ file1.cc -shared -fPIC -o libblah2.so
$ g++ file1.cc -shared -fPIC -o libblah3.so
$ g++ file1.cc -shared -fPIC -o libblah4.so
$ g++ file1.cc -shared -fPIC -o libblah5.so
$ g++ -zmuldefs file2.cc -Wl,-{L,R}$(pwd) -lblah{1..5} -o fdsa
```
This particular example may not end up crashing, but it is essentially the situation that existed in that group's code. When it does crash, it is most likely in a destructor, usually as a double free.
Over the years they had added -zmuldefs to their build to get rid of the multiply-defined symbol errors. The compiler emits code that runs the constructors/destructors on global objects; -zmuldefs makes the duplicate definitions live at the same location in memory, but the constructors/destructors still run once for the exe and once for each library that includes the offending header, hence the double free.